Simplifying Multi-GPU TensorFlow Training with Horovod

The official TensorFlow documentation describes several approaches to multi-GPU training; Horovod (open-sourced by Uber) simplifies it considerably. Code speaks for itself, so below is a multi-GPU TensorFlow training example based on Horovod:

import tensorflow as tf
import horovod.tensorflow as hvd

# Initialize Horovod
hvd.init()

# Pin GPU to be used to process local rank (one GPU per process)
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Build model…
loss = …
# Scale the learning rate by the number of workers (one process per GPU).
opt = tf.train.AdagradOptimizer(0.01 * hvd.size())

# Add Horovod Distributed Optimizer
opt = hvd.DistributedOptimizer(opt)

# Add hook to broadcast variables from rank 0 to all other processes during
# initialization.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

# Make training operation
train_op = opt.minimize(loss)

# The MonitoredTrainingSession takes care of session initialization,
# restoring from a checkpoint, saving to a checkpoint, and closing when done
# or an error occurs.
with tf.train.MonitoredTrainingSession(checkpoint_dir="/tmp/train_logs",
                                       config=config,
                                       hooks=hooks) as mon_sess:
    while not mon_sess.should_stop():
        # Perform synchronous training.
        mon_sess.run(train_op)
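What `hvd.DistributedOptimizer` adds is an allreduce step: before each update, every worker's local gradient is averaged element-wise across all processes, and all workers apply the same averaged update. The effect on a single gradient tensor can be sketched in plain Python (the worker gradients below are made-up illustration values, not Horovod output):

```python
# Sketch of the gradient averaging that hvd.DistributedOptimizer
# performs via allreduce: each worker contributes its local gradient,
# and every worker receives the element-wise mean.

def allreduce_average(worker_grads):
    """Element-wise mean of one gradient tensor across workers."""
    num_workers = len(worker_grads)  # corresponds to hvd.size()
    return [sum(vals) / num_workers for vals in zip(*worker_grads)]

# Hypothetical local gradients from 4 worker processes.
grads = [
    [0.25, 0.5],
    [0.5,  0.0],
    [0.75, 1.0],
    [1.0,  0.5],
]

print(allreduce_average(grads))  # [0.625, 0.5] -- same update on every worker
```

Because each synchronous step now effectively consumes `hvd.size()` times as many examples, the training script above scales the learning rate by `hvd.size()`. The script is launched with one process per GPU, e.g. `horovodrun -np 4 python train.py`.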



posted @ 2022-11-13 22:35  dlhl