zhlj / starrygl-DynamicHistory

Commit 88de1d9c
authored Dec 21, 2023 by Wenjie Huang
SequencePipe: support long tensor types
parent 32fec45c
Showing 4 changed files with 27 additions and 1 deletion:

    cora.py                          +1   -1
    starrygl/distributed/cclib.py    +6   -0
    starrygl/parallel/sequence.py    +0   -0
    starrygl/parallel/utils.py       +20  -0
cora.py
@@ -4,7 +4,7 @@ from torch_geometric.utils import add_remaining_self_loops, to_undirected
 import os.path as osp
 import sys
-from starrygl.graph import GraphData
+from starrygl.data import GraphData
 import logging
 logging.getLogger().setLevel(logging.INFO)
starrygl/distributed/cclib.py
@@ -149,6 +149,9 @@ def batch_send(
    group: Any = None,
    async_op: bool = False,
):
    if len(tensors) == 0:
        return BatchWork(None, None)
    # tensors = tuple(t.data for t in tensors)
    backend = dist.get_backend(group)
@@ -171,6 +174,9 @@ def batch_recv(
    group: Any = None,
    async_op: bool = False,
):
    if len(tensors) == 0:
        return BatchWork(None, None)
    # tensors = tuple(t.data for t in tensors)
    backend = dist.get_backend(group)
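Only the unchanged context around batch_send and batch_recv is visible here; most of the lines this commit adds to cclib.py are not shown in this extract. As a rough, non-authoritative sketch of the kind of batched point-to-point exchange such helpers wrap, the snippet below uses plain torch.distributed primitives (dist.P2POp and dist.batch_isend_irecv) with a long (int64) tensor. The function name exchange_long_tensor and the peer/device setup are illustrative assumptions, not StarryGL API.

import torch
import torch.distributed as dist

def exchange_long_tensor(index: torch.Tensor, peer: int) -> torch.Tensor:
    # index is an int64 ("long") tensor; with the NCCL backend it must live on a GPU.
    recv_buf = torch.empty_like(index)  # same shape and dtype (torch.long) as the outgoing tensor
    ops = [
        dist.P2POp(dist.isend, index, peer),
        dist.P2POp(dist.irecv, recv_buf, peer),
    ]
    # Issue both operations as one batch and wait for all of them to complete.
    for req in dist.batch_isend_irecv(ops):
        req.wait()
    return recv_buf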
starrygl/parallel/sequence.py

This diff is collapsed.
starrygl/parallel/utils.py (new file, mode 100644)
import torch
import torch.nn as nn
import torch.distributed as dist

from torch import Tensor
from typing import *

__all__ = [
    "all_reduce_gradients",
    "all_reduce_buffers",
]

def all_reduce_gradients(net: nn.Module, op=dist.ReduceOp.SUM, group=None):
    for p in net.parameters():
        dist.all_reduce(p.grad, op=op, group=group)

def all_reduce_buffers(net: nn.Module, op=dist.ReduceOp.AVG, group=None):
    for b in net.buffers():
        dist.all_reduce(b.data, op=op, group=group)
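A minimal usage sketch for the two helpers above, in a data-parallel training step. The model, loss, optimizer, and process-group initialization are placeholder assumptions and not part of this commit; only all_reduce_gradients and all_reduce_buffers come from the new file.

import torch
import torch.nn as nn
import torch.distributed as dist

from starrygl.parallel.utils import all_reduce_gradients, all_reduce_buffers

def train_step(net: nn.Module, x: torch.Tensor, y: torch.Tensor, opt: torch.optim.Optimizer):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    # Sum gradients across ranks. Every parameter is expected to have a .grad
    # tensor here; all_reduce_gradients does not skip parameters whose grad is None.
    all_reduce_gradients(net)
    # Average buffers (e.g. BatchNorm running statistics) across ranks.
    # ReduceOp.AVG generally requires a recent PyTorch with the NCCL backend;
    # on other backends, pass op=dist.ReduceOp.SUM and divide by the world size.
    all_reduce_buffers(net)
    opt.step()
    return loss

Because gradients are summed rather than averaged, callers may want to scale the loss (or the learning rate) by 1/world_size to match DistributedDataParallel's averaging behaviour.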