fix table
commit 58586112c1
parent 16811d0efe
@@ -66,6 +66,8 @@ Below is a display of the model's multi-turn dialogue ability regarding code:
- The peft library is introduced to **support training methods such as lora** (see the usage sketch after the table below).
- The following table compares the training speed of Open-Llama with that of the original Llama; Llama's performance data is quoted from the original Llama paper (an illustrative DeepSpeed config follows the table).
| | DeepSpeed Stage | Offload | Activation Checkpoint | Total Token | GPU hours | Speed token/s/gpu | Batch Size | CPU Memory |
|----------------|-----------------|---------|-----------------------|-------------|-----------|-------------------|------------|------------|
| Open-Llama 7B | 1 | False | False | 173.7B | 13412 | 3587 | 2 | 94G |
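The lora bullet above only names the feature. As a rough illustration of what training with peft involves, here is a minimal sketch of wrapping a causal LM in a LoRA adapter; the checkpoint path and hyperparameters are assumptions for illustration, not the repository's actual training configuration.

```python
# Minimal sketch of a LoRA setup via peft; path and hyperparameters
# below are illustrative assumptions, not Open-Llama's actual settings.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("path/to/open-llama-checkpoint")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,          # causal language modeling
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in Llama blocks
)

# Wrap the base model; only the small adapter weights remain trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```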
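To make the table's configuration columns concrete, the following sketch shows what a DeepSpeed config corresponding to the Open-Llama 7B row could look like (ZeRO stage 1, offload and activation checkpointing disabled, per-GPU batch size 2); every value outside those columns is an assumption.

```python
# Illustrative DeepSpeed config matching the Open-Llama 7B table row.
# Only the stage, offload, and batch-size fields come from the table;
# the remaining values are assumed for illustration.
ds_config = {
    "train_micro_batch_size_per_gpu": 2,   # "Batch Size" column
    "zero_optimization": {
        "stage": 1,                        # "DeepSpeed Stage" column
        # omitting "offload_optimizer" keeps Offload = False
    },
    "bf16": {"enabled": True},             # assumed mixed-precision setting
    "gradient_clipping": 1.0,              # assumed value
}
# Activation checkpointing (disabled in this row) is enabled in model code,
# e.g. via gradient checkpointing, rather than in this dict alone.
```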
@@ -2,7 +2,7 @@
* @Author: LiangSong(sl12160010@gmail.com)
* @Date: 2023-03-10 21:18:35
* @LastEditors: LiangSong(sl12160010@gmail.com)
- * @LastEditTime: 2023-05-08 22:28:40
+ * @LastEditTime: 2023-05-08 22:29:53
* @FilePath: /Open-Llama/README_zh.md
* @Description:
*
@@ -67,6 +67,8 @@ print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
- The peft library is introduced to **support training methods such as lora**.
- The following table compares the training speed of Open-Llama with that of the original Llama; Llama's performance data is quoted from the original Llama paper.
| | DeepSpeed Stage | Offload | Activation Checkpoint | Total Token | GPU hours | Speed token/s/gpu | Batch Size | CPU Memory |
|----------------|-----------------|---------|-----------------------|-------------|-----------|-------------------|------------|------------|
| Open-Llama 7B | 1 | False | False | 173.7B | 13412 | 3587 | 2 | 94G |