zhlj / BTS-MTGNN

Commit a3cc8ba3, authored Mar 17, 2025 by zlj
Parent: 81de4b74

    fix config

Showing 3 changed files with 6 additions and 4 deletions:

    config/JODIE_large.yml      +2  -1
    examples/test_all.sh        +1  -1
    starrygl/module/utils.py    +3  -2
config/JODIE_large.yml

@@ -21,6 +21,6 @@ gnn:
 train:
   - epoch: 50
     batch_size: 3000
-    lr: 0.0004
+    lr: 0.0016
     dropout: 0.1
     all_on_gpu: True
\ No newline at end of file
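The changed block is a YAML list under the `train` key, so the updated learning rate is addressed as `train[0]["lr"]` once parsed. A minimal sketch of reading the new value, assuming PyYAML is available (key names and values are taken from the diff above; the loading code itself is illustrative, not part of the repo):

```python
# Parse the updated "train" block from config/JODIE_large.yml.
# Assumes PyYAML; the snippet mirrors the post-commit file contents.
import yaml

snippet = """
train:
  - epoch: 50
    batch_size: 3000
    lr: 0.0016
    dropout: 0.1
    all_on_gpu: True
"""

cfg = yaml.safe_load(snippet)
# "train" is a list, so the settings live in its first element.
print(cfg["train"][0]["lr"])
```

Note that because `train` holds a list item (the leading `-`), indexing with `[0]` is required before the per-run keys become reachable.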
examples/test_all.sh

@@ -19,7 +19,7 @@ memory_type=("historical")
 #memory_type=("local" "all_update" "historical" "all_reduce")
 shared_memory_ssim=("0.3")
 #data_param=("WIKI" "REDDIT" "LASTFM" "WikiTalk")
-data_param=("LASTFM" "WikiTalk" "StackOverflow" "GDELT")
+data_param=("StackOverflow" "GDELT")
 # "StackOverflow" "GDELT")
 #"GDELT")
 #data_param=("WIKI" "REDDIT" "LASTFM" "DGraphFin" "WikiTalk" "StackOverflow")
starrygl/module/utils.py

@@ -167,12 +167,13 @@ class AdaParameter:
         #print(self.alpha)
         self.beta = max(min(self.beta, self.max_beta), self.min_beta)
         self.alpha = max(min(self.alpha, self.max_alpha), self.min_alpha)
         #print(self.count_fetch,self.count_memory_update,self.count_gnn_aggregate,self.count_memory_sync)
         #print(self.beta,self.alpha)
         ctx = DistributedContext.get_default_context()
-        beta_comm = torch.tensor([self.beta])
+        beta_comm = torch.tensor([self.beta], dtype=torch.float)
         torch.distributed.all_reduce(beta_comm, group=ctx.gloo_group)
         self.beta = beta_comm[0].item() / ctx.world_size
-        alpha_comm = torch.tensor([self.alpha])
+        alpha_comm = torch.tensor([self.alpha], dtype=torch.float)
         torch.distributed.all_reduce(alpha_comm, group=ctx.gloo_group)
         self.alpha = alpha_comm[0].item() / ctx.world_size
         #print('gnn aggregate {} fetch {} memory sync {} memory update {}'.format(average_gnn_aggregate,average_fetch,average_memory_sync_time,average_memory_update_time))