System Challenges and Measures for Efficient Training of 100-Trillion-Parameter Pre-Trained Models
Published: 2022-04-08  Authors: MA Zixuan, ZHAI Jidong, HAN Wentao, CHEN Wenguang, ZHENG Weimin



Challenges and Measures for Efficient Training of 100-Trillion-Parameter Pre-Trained Models

MA Zixuan, ZHAI Jidong, HAN Wentao, CHEN Wenguang, ZHENG Weimin

(Tsinghua University, Beijing 100083, China)

Abstract: As the size of pre-trained artificial intelligence models grows dramatically, training such models requires massive computing and memory capability. To this end, we trained an unprecedentedly large-scale pre-trained model with 174 trillion parameters on an entire Chinese exascale supercomputer; this parameter count rivals the number of synapses in the human brain. This article discusses the key system challenges encountered in such large-scale model training, including choosing an efficient parallel strategy, storing data efficiently, selecting appropriate data precision, and balancing load dynamically, and then summarizes solutions to these challenges.
Keywords: artificial intelligence; supercomputer; mixture of experts; heterogeneous architecture
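The keywords mention mixture-of-experts models; the dynamic load-balancing challenge named in the abstract arises because an MoE gate decides at runtime which expert processes each token, so per-expert loads can become highly skewed. A minimal sketch of this effect, assuming the illustrative helpers `top1_route` and `skewed_gate` (not from the paper):

```python
def top1_route(tokens, num_experts, gate):
    """Route each token to its highest-scoring expert (top-1 gating)
    and return the resulting per-expert load counts."""
    loads = [0] * num_experts
    for tok in tokens:
        scores = gate(tok)
        loads[scores.index(max(scores))] += 1
    return loads

def skewed_gate(tok):
    """Toy gate whose scores favor expert 0 for 3 of every 4 tokens."""
    scores = [1.0, 1.0, 1.0, 1.0]
    scores[0 if tok % 4 != 3 else 3] = 2.0
    return scores

loads = top1_route(range(100), 4, skewed_gate)
print(loads)  # expert 0 receives 75 of 100 tokens: [75, 0, 0, 25]
```

When most tokens land on one expert, the devices holding the other experts sit idle; this is why a training system at this scale must rebalance expert placement dynamically rather than fix it in advance.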
