add hardware configuration to readme
This commit is contained in:
parent
f3c664bde3
commit
676dcfd995
@@ -88,6 +88,7 @@ Code for v1 is available at https://github.com/s-JoL/Open-Llama/tree/v1.0
- Special version of the [Transformers library](https://github.com/Bayes-Song/transformers)
- [Accelerate library](https://huggingface.co/docs/accelerate/index)
- CUDA 11.6 or higher (for GPU acceleration)
- Hardware configuration: currently using (64 CPU, 1000G Memory, 8xA100-80G) x N. Curiously, using more CPUs actually makes things slightly slower; I suspect this is related to the dataloader's multiprocessing.
## **Getting Started**
### Installation
@@ -90,6 +90,7 @@ When training language models, our goal is to build a versatile model that can h
- Special version of [Transformers library](https://github.com/Bayes-Song/transformers)
- [Accelerate library](https://huggingface.co/docs/accelerate/index)
- CUDA 11.6 or higher (for GPU acceleration)
- Hardware configuration: currently using (64 CPU, 1000G Memory, 8xA100-80G) x N. Curiously, the system runs slightly slower when more CPUs are used; I suspect this is related to the dataloader's multiprocessing.
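The observation that adding CPUs can slow things down is plausible when per-item loading is cheap: the startup and inter-process communication overhead of extra dataloader workers can outweigh the parallelism they add. A minimal, hypothetical stdlib sketch of this effect (an analogy, not the repo's actual dataloader code; `load_item` is an illustrative stand-in for a cheap `__getitem__`):

```python
import time
from multiprocessing import Pool


def load_item(i):
    # Stand-in for a cheap per-sample load: trivial work, so any
    # process-pool overhead dominates the total runtime.
    return i * 2


def serial(n):
    """Load n items in the main process; return (data, seconds)."""
    start = time.perf_counter()
    data = [load_item(i) for i in range(n)]
    return data, time.perf_counter() - start


def parallel(n, workers):
    """Load n items via a worker pool; return (data, seconds)."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        data = pool.map(load_item, range(n))
    return data, time.perf_counter() - start
```

In practice this suggests treating the DataLoader worker count as something to tune empirically per machine rather than setting it to the CPU count.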
## **Getting Started**
### Installation