update readme

LiangSong 2023-04-29 20:30:24 +08:00
parent fc21a75d1e
commit 52cd09f664
2 changed files with 4 additions and 2 deletions


@@ -2,7 +2,7 @@
* @Author: LiangSong(sl12160010@gmail.com)
* @Date: 2023-03-10 21:18:35
* @LastEditors: LiangSong(sl12160010@gmail.com)
* @LastEditTime: 2023-04-29 12:30:47
* @LastEditTime: 2023-04-29 20:29:31
* @FilePath: /Open-Llama/README.md
* @Description:
*
@@ -72,6 +72,7 @@ print(tokenizer.decode(pred.cpu()[0]).strip())
3. Unify the pre-training and instruction fine-tuning entry point into train_lm.py
4. Provide more convenient configuration; see configs/pretrain_config.yaml
5. Provide functionality to supplement the vocabulary based on other pre-trained models and continue pre-training
6. Support resuming training from a checkpoint, including loading optimizer parameters/learning rate and skipping duplicate data
[2023.4.16] Release v1.0


@@ -2,7 +2,7 @@
* @Author: LiangSong(sl12160010@gmail.com)
* @Date: 2023-03-10 21:18:35
* @LastEditors: LiangSong(sl12160010@gmail.com)
* @LastEditTime: 2023-04-29 12:31:00
* @LastEditTime: 2023-04-29 20:30:12
* @FilePath: /Open-Llama/README_en.md
* @Description:
*
@@ -72,6 +72,7 @@ This update mainly includes the following aspects, increasing the effective trai
3. Unify the pre-training and instruction fine-tuning training entry to train_lm.py
4. Provide more convenient configuration; see configs/pretrain_config.yaml
5. Provide functionality to supplement the vocabulary based on other pre-trained models and continue pre-training
6. Support resuming training from a checkpoint, including loading optimizer parameters/learning rate and skipping duplicate data
[2023.4.16] Release v1.0
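Item 6 above (resuming from a checkpoint with optimizer state and duplicate-skipping) can be sketched in miniature. The following is a hypothetical pure-Python illustration of the idea, not the repository's actual train_lm.py implementation; the checkpoint keys and helper names are assumptions for illustration only:

```python
import json
import os
import tempfile

def save_checkpoint(path, step, optimizer_state, lr):
    # Persist everything needed to resume: global step, optimizer state, learning rate.
    with open(path, "w") as f:
        json.dump({"step": step, "optimizer_state": optimizer_state, "lr": lr}, f)

def resume(path, dataset):
    # Reload the checkpoint and skip samples already consumed,
    # so the resumed run does not train on duplicate data.
    with open(path) as f:
        ckpt = json.load(f)
    remaining = dataset[ckpt["step"]:]
    return ckpt, remaining

# Usage: save at step 3, then resume and continue from sample 3 onward.
path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
save_checkpoint(path, step=3, optimizer_state={"momentum": [0.1]}, lr=1e-4)
ckpt, remaining = resume(path, list(range(10)))
```

A real trainer would store framework-native optimizer and scheduler state (e.g. state dicts) and fast-forward the data loader rather than slicing a list, but the bookkeeping is the same: record how far you got, and replay nothing.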