From 0157b6938d547f64e257b55055b5aa3f6cfd3e7f Mon Sep 17 00:00:00 2001
From: LiangSong
Date: Wed, 17 May 2023 22:45:04 +0700
Subject: [PATCH] update readme

---
 README.md    | 5 +----
 README_zh.md | 5 +----
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index d9f3e81..3e6b8d4 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
  * @Author: s-JoL(sl12160010@gmail.com)
  * @Date: 2023-03-10 21:18:35
  * @LastEditors: s-JoL(sl12160010@gmail.com)
- * @LastEditTime: 2023-05-17 22:21:07
+ * @LastEditTime: 2023-05-17 22:44:35
  * @FilePath: /Open-Llama/README.md
  * @Description: 
  * 
@@ -35,10 +35,7 @@ Join [discord](https://discord.gg/TrKxrTpnab) to discuss the development of larg
 
 - **The training speed reaches 3620 tokens/s, faster than the 3370 tokens/s reported in the original Llama paper, reaching the current state-of-the-art level.**
 
-To use the CheckPoint, first, install the latest version of Transformers with the following command:
 ``` python
-pip install git+https://github.com/huggingface/transformers.git
-
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 tokenizer = AutoTokenizer.from_pretrained("s-JoL/Open-Llama-V2", use_fast=False)
diff --git a/README_zh.md b/README_zh.md
index 02d7723..bd56330 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -2,7 +2,7 @@
  * @Author: s-JoL(sl12160010@gmail.com)
  * @Date: 2023-03-10 21:18:35
  * @LastEditors: s-JoL(sl12160010@gmail.com)
- * @LastEditTime: 2023-05-17 22:20:48
+ * @LastEditTime: 2023-05-17 22:43:46
  * @FilePath: /Open-Llama/README_zh.md
  * @Description: 
  * 
@@ -36,10 +36,7 @@ Open-Llama是一个开源项目，提供了一整套用于构建大型语言模
 
 - **训练速度达到3620 token/s，快于Llama原文中的3370 token/s，达到目前sota的水平。**
 
-使用ckpt需要先用下面命令安装最新版本Transformers:
 ``` python
-pip install git+https://github.com/huggingface/transformers.git
-
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 tokenizer = AutoTokenizer.from_pretrained("s-JoL/Open-Llama-V2", use_fast=False)
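
For context, the patch drops the source install of Transformers from both READMEs and leaves only the checkpoint-loading snippet. Below is a minimal sketch of what using that snippet might look like end to end; only the repo id `s-JoL/Open-Llama-V2` and `use_fast=False` come from the patched READMEs, while the model load, prompt, and generation settings are illustrative assumptions rather than the project's documented usage.

```python
# Minimal sketch (assumptions noted): load the released Open-Llama checkpoint
# with a recent pip release of Transformers that already includes Llama support,
# so no source install of the library is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

# The repo id and use_fast=False appear in the README snippet touched by the patch.
tokenizer = AutoTokenizer.from_pretrained("s-JoL/Open-Llama-V2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("s-JoL/Open-Llama-V2")

# Illustrative generation call; the prompt and decoding settings are assumptions.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```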