Overview of Yuan2.0's Main Code Structure and the Implementation of Its Three Parallelism Strategies

The code structure is shown in the figure below:

During Megatron initialization in initialize_megatron, data parallelism, pipeline parallelism, and tensor parallelism are all set up. A brief introduction to each and its implementation follows.

Distributed model environment initialization:

Take two servers with 8 GPUs each (16 GPUs in total) as an example, training a model with 12 transformer layers.

Figure 1

Figure 2

In this example, the model is cut vertically into 4 parts of 3 layers each to implement pipeline parallelism, and cut horizontally to implement tensor parallelism, splitting the layer block (1, 2, 3) in Figure 2 into two halves.

 

Figure 3

The figure above, taking model1 as an example, shows how a single model replica is cut into eight pieces and placed onto eight GPUs.

What the complete model replica model1 consists of:

Three vertical cuts split the 12 transformer layers into four parts of 3 layers each, which implements pipeline parallelism (this requires pipeline_model_parallel_size = 4).

One horizontal cut implements tensor parallelism: each 3-layer block, from (1, 2, 3) through (10, 11, 12), is split into an upper half and a lower half (this requires tensor_model_parallel_size = 2).
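To make the two cuts concrete, the short sketch below (plain Python written for this note, not code from Yuan2.0 or Megatron) maps each of the 12 layers to its pipeline stage and lists the two tensor-parallel shards of that stage, assuming pipeline_model_parallel_size = 4 and tensor_model_parallel_size = 2:

# Illustration only: map 12 transformer layers onto 4 pipeline stages,
# with each stage further split across 2 tensor-parallel ranks.
NUM_LAYERS = 12
PIPELINE_PARALLEL_SIZE = 4   # 4 stages, 3 layers each (the three vertical cuts)
TENSOR_PARALLEL_SIZE = 2     # each stage split into an upper and a lower shard

layers_per_stage = NUM_LAYERS // PIPELINE_PARALLEL_SIZE  # = 3

for layer in range(1, NUM_LAYERS + 1):
    stage = (layer - 1) // layers_per_stage
    shards = [f"stage{stage}/tp{t}" for t in range(TENSOR_PARALLEL_SIZE)]
    print(f"layer {layer:2d} -> pipeline stage {stage}, tensor shards {shards}")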

tensor model-parallel groups: the process groups within which tensor parallelism takes place. From the figure:

model1: [0, 1], [8, 9], [4, 5], [12, 13]

model2: [2, 3], [10, 11], [6, 7], [14, 15]

This corresponds to the following example in the code comments:

8 tensor model-parallel groups:
    [g0, g1], [g2, g3], [g4, g5], [g6, g7], [g8, g9], [g10, g11], [g12, g13], [g14, g15]

 

pipeline model-parallel groups: the process groups within which pipeline parallelism takes place. From the figure:

model1 is first cut vertically into 4 pipeline stages and then cut horizontally, so it contributes two pipeline groups: the first is [0, 4, 8, 12] and the second is [1, 5, 9, 13].

The same holds for model2, which contributes [2, 6, 10, 14] and [3, 7, 11, 15].
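The pipeline group is what point-to-point activation passing runs over: within [0, 4, 8, 12], stage 0 on rank 0 sends its activations to stage 1 on rank 4, and so on. A rough sketch of that pattern (illustration only, not Megatron's p2p_communication module; the parallel_state helper names are taken from Megatron-Core):

# Illustration: pass activations to the next stage of this rank's pipeline group.
import torch
from megatron.core import parallel_state

def send_to_next_stage(activations: torch.Tensor) -> None:
    if not parallel_state.is_pipeline_last_stage():
        torch.distributed.send(
            activations, dst=parallel_state.get_pipeline_model_parallel_next_rank())

def recv_from_prev_stage(buffer: torch.Tensor) -> None:
    if not parallel_state.is_pipeline_first_stage():
        torch.distributed.recv(
            buffer, src=parallel_state.get_pipeline_model_parallel_prev_rank())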

 

data_parallel groups: data parallelism takes place between model shards that hold the same parameters. Comparing the model layout on the two servers in the figure, rank 0 and rank 2 hold the same shard, as do ranks 1 and 3, ranks 4 and 6, and so on. This corresponds to the following example in the code comments:

8 data_parallel groups:
    [g0, g2], [g1, g3], [g4, g6], [g5, g7], [g8, g10], [g9, g11], [g12, g14], [g13, g15]
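Before walking through the Megatron code, the three group layouts above can be reproduced with plain rank arithmetic. The sketch below is a standalone illustration (no torch.distributed, written for this note) using world_size = 16, tensor_model_parallel_size = 2 and pipeline_model_parallel_size = 4, and it follows the same formulas used by initialize_model_parallel:

# Standalone illustration of the rank arithmetic behind the three kinds of groups.
world_size = 16
tensor_model_parallel_size = 2
pipeline_model_parallel_size = 4

num_tensor_groups = world_size // tensor_model_parallel_size      # 8
num_pipeline_groups = world_size // pipeline_model_parallel_size  # 4

# Tensor-parallel groups: blocks of consecutive ranks.
tensor_groups = [list(range(i * tensor_model_parallel_size,
                            (i + 1) * tensor_model_parallel_size))
                 for i in range(num_tensor_groups)]

# Pipeline-parallel groups: ranks with a stride of num_pipeline_groups.
pipeline_groups = [list(range(i, world_size, num_pipeline_groups))
                   for i in range(num_pipeline_groups)]

# Data-parallel groups: ranks holding the same shard across the two replicas.
data_groups = []
for i in range(pipeline_model_parallel_size):
    start, end = i * num_pipeline_groups, (i + 1) * num_pipeline_groups
    for j in range(tensor_model_parallel_size):
        data_groups.append(list(range(start + j, end, tensor_model_parallel_size)))

print(tensor_groups)    # [[0, 1], [2, 3], ..., [14, 15]]
print(pipeline_groups)  # [[0, 4, 8, 12], [1, 5, 9, 13], [2, 6, 10, 14], [3, 7, 11, 15]]
print(data_groups)      # [[0, 2], [1, 3], [4, 6], [5, 7], ..., [13, 15]]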

Code implementation:

initialize_model_parallel(
    tensor_model_parallel_size: int = 1,
    pipeline_model_parallel_size: int = 1,
    virtual_pipeline_model_parallel_size: Optional[int] = None,
    pipeline_model_parallel_split_rank: Optional[int] = None,
    use_fp8: bool = False,
)
tensor_model_parallel_size = 2
pipeline_model_parallel_size = 4
world_size = 16
data_parallel_size: int = world_size // (tensor_model_parallel_size * pipeline_model_parallel_size) = 2
num_tensor_model_parallel_groups: int = world_size // tensor_model_parallel_size = 8
num_pipeline_model_parallel_groups: int = world_size // pipeline_model_parallel_size = 4
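For reference, a minimal sketch of how these sizes would be passed in at start-up. It assumes a torchrun-style launcher that sets the usual environment variables and that initialize_model_parallel is exposed as in Megatron-Core's megatron.core.parallel_state; it is not the actual Yuan2.0 launch script:

# Sketch only: create the global process group, then build the parallel groups.
import os
import torch
from megatron.core import parallel_state

torch.distributed.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=2,    # 2-way tensor parallelism
    pipeline_model_parallel_size=4,  # 4 pipeline stages
)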

# Build the data-parallel groups.
all_data_parallel_group_ranks = []
for i in range(pipeline_model_parallel_size):
    start_rank = i * num_pipeline_model_parallel_groups
    end_rank = (i + 1) * num_pipeline_model_parallel_groups
    for j in range(tensor_model_parallel_size):
        ranks = range(start_rank + j, end_rank, tensor_model_parallel_size)
        all_data_parallel_group_ranks.append(list(ranks))
        group = torch.distributed.new_group(ranks)
        group_gloo = torch.distributed.new_group(ranks, backend="gloo")
        if rank in ranks:
            _DATA_PARALLEL_GROUP = group
            _DATA_PARALLEL_GROUP_GLOO = group_gloo
            _DATA_PARALLEL_GLOBAL_RANKS = ranks
print(all_data_parallel_group_ranks)

Output (all_data_parallel_group_ranks):
[[0, 2], [1, 3], [4, 6], [5, 7], [8, 10], [9, 11], [12, 14], [13, 15]]
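These groups ([0, 2], [1, 3], ...) are exactly the communicators over which gradients are averaged during training. A hand-rolled illustration of that use (Megatron's own DDP wrapper and optimizer do this internally; the parallel_state getters are Megatron-Core names):

# Illustration: average gradients across the ranks that hold identical parameters.
import torch
from megatron.core import parallel_state

def allreduce_gradients(model: torch.nn.Module) -> None:
    group = parallel_state.get_data_parallel_group()
    dp_size = parallel_state.get_data_parallel_world_size()
    for param in model.parameters():
        if param.grad is not None:
            torch.distributed.all_reduce(param.grad, group=group)
            param.grad /= dp_size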
# Build the model-parallel groups.
# i.e., the full set of GPUs occupied by one complete model replica
global _MODEL_PARALLEL_GROUP
assert _MODEL_PARALLEL_GROUP is None, 'model parallel group is already initialized'
for i in range(data_parallel_size):
    ranks = [data_parallel_group_ranks[i] for data_parallel_group_ranks in all_data_parallel_group_ranks]
    group = torch.distributed.new_group(ranks)
    print(ranks)
    if rank in ranks:
        _MODEL_PARALLEL_GROUP = group

Output (ranks):
[0, 1, 4, 5, 8, 9, 12, 13]
[2, 3, 6, 7, 10, 11, 14, 15]
# Build the tensor model-parallel groups.
global _TENSOR_MODEL_PARALLEL_GROUP
assert _TENSOR_MODEL_PARALLEL_GROUP is None, 'tensor model parallel group is already initialized'
for i in range(num_tensor_model_parallel_groups):
    ranks = range(i * tensor_model_parallel_size, (i + 1) * tensor_model_parallel_size)
    group = torch.distributed.new_group(ranks)
    print(list(ranks))
    if rank in ranks:
        _TENSOR_MODEL_PARALLEL_GROUP = group

Output:
[0, 1]
[2, 3]
[4, 5]
[6, 7]
[8, 9]
[10, 11]
[12, 13]
[14, 15]
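Each of these two-rank groups is the communicator used inside the tensor-parallel layers: the partial results produced by the two weight shards are combined with an all-reduce over the group. A rough sketch of a row-parallel matmul (illustration only; Megatron's RowParallelLinear/ColumnParallelLinear implement this properly):

# Illustration: combine partial outputs of a row-parallel linear layer
# by all-reducing over the tensor-parallel group ([0, 1], [2, 3], ...).
import torch
from megatron.core import parallel_state

def row_parallel_matmul(x_shard: torch.Tensor, w_shard: torch.Tensor) -> torch.Tensor:
    partial = x_shard @ w_shard   # each rank computes a partial sum from its shard
    torch.distributed.all_reduce(
        partial, group=parallel_state.get_tensor_model_parallel_group())
    return partial                # full result available on both ranks of the group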
# Build the pipeline model-parallel groups and embedding groups
for i in range(num_pipeline_model_parallel_groups):
    ranks = range(i, world_size, num_pipeline_model_parallel_groups)
    print(list(ranks))
    group = torch.distributed.new_group(ranks)
    if rank in ranks:
        _PIPELINE_MODEL_PARALLEL_GROUP = group
        _PIPELINE_GLOBAL_RANKS = ranks
    # Setup embedding group (to exchange gradients between
    # first and last stages).
    if len(ranks) > 1:
        embedding_ranks = [ranks[0], ranks[-1]]
        position_embedding_ranks = [ranks[0]]
        print(embedding_ranks)
        print(position_embedding_ranks)
        if pipeline_model_parallel_split_rank is not None:
            if ranks[pipeline_model_parallel_split_rank] not in embedding_ranks:
                embedding_ranks = [ranks[0], ranks[pipeline_model_parallel_split_rank], ranks[-1]]
            if ranks[pipeline_model_parallel_split_rank] not in position_embedding_ranks:
                position_embedding_ranks = [ranks[0], ranks[pipeline_model_parallel_split_rank]]
    else:
        embedding_ranks = ranks
        position_embedding_ranks = ranks

    group = torch.distributed.new_group(embedding_ranks)
    if rank in embedding_ranks:
        _EMBEDDING_GROUP = group
    if rank in ranks:
        _EMBEDDING_GLOBAL_RANKS = embedding_ranks

    group = torch.distributed.new_group(position_embedding_ranks)
    if rank in position_embedding_ranks:
        _POSITION_EMBEDDING_GROUP = group
    if rank in ranks:
        _POSITION_EMBEDDING_GLOBAL_RANKS = position_embedding_ranks
Output (for each pipeline group: the group's ranks, then its embedding ranks, then its position-embedding ranks):
[0, 4, 8, 12]
[0, 12]
[0]
[1, 5, 9, 13]
[1, 13]
[1]
[2, 6, 10, 14]
[2, 14]
[2]
[3, 7, 11, 15]
[3, 15]
[3]
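After initialization, training code normally does not touch these rank lists directly; it queries them through the parallel_state helpers. A small sketch of the typical queries (helper names as in Megatron-Core's parallel_state):

# Each rank can ask where it sits in the tensor / pipeline / data-parallel grid.
from megatron.core import parallel_state

tp_rank = parallel_state.get_tensor_model_parallel_rank()    # 0 or 1
pp_rank = parallel_state.get_pipeline_model_parallel_rank()  # 0, 1, 2 or 3
dp_size = parallel_state.get_data_parallel_world_size()      # 2

if parallel_state.is_pipeline_first_stage():
    pass  # e.g. run the word/position embeddings here
if parallel_state.is_pipeline_last_stage():
    pass  # e.g. compute the loss here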

Reference:

https://zhuanlan.zhihu.com/p/470279673

 
