update readme

LiangSong 2023-05-17 22:45:04 +07:00
parent 95973b5de1
commit 0157b6938d
2 changed files with 2 additions and 8 deletions

README.md

@@ -2,7 +2,7 @@
* @Author: s-JoL(sl12160010@gmail.com)
* @Date: 2023-03-10 21:18:35
* @LastEditors: s-JoL(sl12160010@gmail.com)
- * @LastEditTime: 2023-05-17 22:21:07
+ * @LastEditTime: 2023-05-17 22:44:35
* @FilePath: /Open-Llama/README.md
* @Description:
*
@@ -35,10 +35,7 @@ Join [discord](https://discord.gg/TrKxrTpnab) to discuss the development of large language models
- **The training speed reaches 3620 tokens/s, faster than the 3370 tokens/s reported in the original Llama paper, reaching the current state-of-the-art level.**
To use the checkpoint, first install the latest version of Transformers with the following command:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("s-JoL/Open-Llama-V2", use_fast=False)
```

README_zh.md

@@ -2,7 +2,7 @@
* @Author: s-JoL(sl12160010@gmail.com)
* @Date: 2023-03-10 21:18:35
* @LastEditors: s-JoL(sl12160010@gmail.com)
- * @LastEditTime: 2023-05-17 22:20:48
+ * @LastEditTime: 2023-05-17 22:43:46
* @FilePath: /Open-Llama/README_zh.md
* @Description:
*
@@ -36,10 +36,7 @@ Open-Llama is an open-source project that provides a complete suite for building large language models
- **The training speed reaches 3620 tokens/s, faster than the 3370 tokens/s reported in the original Llama paper, reaching the current state-of-the-art level.**
To use the checkpoint, first install the latest version of Transformers with the following command:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("s-JoL/Open-Llama-V2", use_fast=False)
```
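
The snippets in both README versions stop after loading the tokenizer. For context, here is a minimal usage sketch of how such a checkpoint is typically loaded and queried through the standard Transformers API; the prompt and generation settings below are illustrative assumptions, not part of the original README:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("s-JoL/Open-Llama-V2", use_fast=False)
# device_map="auto" requires the `accelerate` package; omit it to load on CPU.
model = AutoModelForCausalLM.from_pretrained("s-JoL/Open-Llama-V2", device_map="auto")

# Illustrative prompt and generation settings (assumptions, not from the README).
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```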