# LLaMA Factory


[ English | 中文 ]

## Table of Contents

- [Features](#features)
- [Benchmark](#benchmark)
- [Changelog](#changelog)
- [Supported Models](#supported-models)
- [Supported Training Approaches](#supported-training-approaches)
- [Provided Datasets](#provided-datasets)
- [Requirements](#requirements)
- [Getting Started](#getting-started)
- [Projects Using LLaMA Factory](#projects-using-llama-factory)
- [License](#license)
- [Citation](#citation)
- [Acknowledgement](#acknowledgement)

## Features

- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, and more.
- **Integrated methods**: (continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO training, DPO training, KTO training, and ORPO training.
- **Scalable precisions**: 32-bit full-parameter tuning, 16-bit freeze-tuning, 16-bit LoRA, and 2/4/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8.
- **Advanced algorithms**: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, and agent tuning.
- **Practical tricks**: FlashAttention-2, Unsloth, RoPE scaling, NEFTune, and rsLoRA.
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, and more.
- **Faster inference**: OpenAI-style API, web UI, and CLI backed by vLLM.

## Benchmark

Compared with ChatGLM's official P-Tuning, LoRA fine-tuning in LLaMA Factory is up to 3.7 times faster and achieves a higher Rouge score on the advertising-text-generation task. Combined with 4-bit quantization, QLoRA fine-tuning in LLaMA Factory further reduces GPU memory usage.

*(benchmark figure)*

**Definitions**

- **Training Speed**: number of training samples processed per second. (batch size = 4, cutoff length = 1024)
- **Rouge Score**: Rouge-2 score on the validation set of the advertising-text-generation task. (batch size = 4, cutoff length = 1024)
- **GPU Memory**: peak GPU memory usage in 4-bit quantized training. (batch size = 1, cutoff length = 1024)
- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA fine-tuning; a config sketch of these LoRA settings follows this list.
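The fragment below mirrors the LoRA-side benchmark settings. It is a minimal sketch assuming the YAML key names used by the configs under `examples/` (`lora_rank`, `cutoff_len`, `per_device_train_batch_size`); merge it into a complete training config rather than running it on its own.

```bash
# Hypothetical fragment reproducing the benchmark hyperparameters above;
# key names are assumed from the example configs under examples/ and should be verified there.
cat > benchmark_lora_fragment.yaml <<'EOF'
finetuning_type: lora
lora_rank: 32
cutoff_len: 1024
per_device_train_batch_size: 4
EOF
```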

## Changelog

[24/05/18] We supported the KTO preference-alignment algorithm. See `examples` for usage.

[24/05/14] We supported training and inference on Ascend NPU devices. See the installation section for details.

[24/05/13] We supported fine-tuning the Yi-1.5 series models.

Earlier updates:

[24/04/26] We supported fine-tuning the multimodal model LLaVA-1.5. See `examples` for usage.

[24/04/22] We provided a Colab notebook for fine-tuning the Llama-3 model on a free T4 GPU. The Hugging Face community has published two Llama-3 models fine-tuned with LLaMA Factory; see Llama3-8B-Chinese-Chat and Llama3-Chinese for details.

[24/04/21] We supported Mixture-of-Depths training based on the AstraMindAI repository. See `examples` for usage.

[24/04/16] We supported BAdam. See `examples` for usage.

[24/04/16] We supported unsloth's long-sequence training (Llama-2-7B-56k trainable within 24GB). Compared with FlashAttention-2, it delivers 117% training speed and 50% memory savings; see this page for more benchmarks.

[24/03/31] We supported ORPO. See `examples` for usage.

[24/03/21] Our paper "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models" is available on arXiv!

[24/03/20] We supported FSDP+QLoRA, which fine-tunes a 70B model on 2x24GB GPUs. See `examples` for usage.

[24/03/13] We supported LoRA+. See `examples` for usage.

[24/03/07] We supported the gradient low-rank projection (GaLore) algorithm. See `examples` for usage.

[24/03/07] We integrated vLLM for fast concurrent inference. Use `infer_backend: vllm` to enjoy 270% inference speed.

[24/02/28] We supported DoRA fine-tuning. Use `use_dora: true` to activate DoRA training.

[24/02/15] We supported the block expansion method proposed by LLaMA Pro. See `examples` for usage.

[24/02/05] The Qwen1.5 (Qwen2 beta) series models are supported for fine-tuning in LLaMA-Factory. See this blog post for details.

[24/01/18] We supported agent tuning for most models. Fine-tune with `dataset: glaive_toolcall` to give a model tool-calling abilities.

[23/12/23] We supported unsloth's implementation to accelerate LoRA training for the LLaMA, Mistral, and Yi models. Use `use_unsloth: true` to activate the unsloth optimization; it delivers 170% training speed, see this page for details.

[23/12/12] We supported fine-tuning the latest mixture-of-experts model Mixtral 8x7B. See the hardware requirements here.

[23/12/01] We supported downloading pre-trained models and datasets from the ModelScope Hub. See this tutorial for usage.

[23/10/21] We supported the NEFTune training trick. Use `neftune_noise_alpha: 5` to activate NEFTune.

[23/09/27] We supported $S^2$-Attn proposed by LongLoRA for the LLaMA models. Use `shift_attn: true` to enable it.

[23/09/23] We integrated the MMLU, C-Eval, and CMMLU benchmarks into this project. See `examples` for usage.

[23/09/10] We supported FlashAttention-2. Use `flash_attn: fa2` to enable FlashAttention-2 if you are using RTX 4090, A100, or H100 GPUs.

[23/08/12] We supported RoPE scaling to extend the context length of the LLaMA models. Use `rope_scaling: linear` to train the model or `rope_scaling: dynamic` to evaluate it.

[23/08/11] We supported DPO training for instruction-tuned models. See `examples` for usage.

[23/07/31] We supported dataset streaming. Use `streaming: true` and `max_steps: 10000` to stream your dataset.

[23/07/29] We released two instruction-tuned 13B models on Hugging Face. See our Hugging Face projects (LLaMA-2 / Baichuan) for details.

[23/07/18] We developed an all-in-one web UI for both training and evaluation. Use `train_web.py` to fine-tune models in your browser. Thanks to @KanadeSiina and @codemayq for their efforts on this feature.

[23/07/09] We released FastEdit ⚡🩹, an easy-to-use package for quickly editing the factual memory of large language models. Follow the FastEdit project if you are interested.

[23/06/29] We provided a reproducible example of fine-tuning an instruction-following model; see Baichuan-7B-sft for details.

[23/06/22] We aligned the demo API with the OpenAI API format, so you can plug the fine-tuned models into any ChatGPT-based application.

[23/06/03] We supported 4-bit LoRA training (a.k.a. QLoRA). See `examples` for usage.

## Supported Models

| Model | Model size | Default module | Template |
| ----- | ---------- | -------------- | -------- |
| Baichuan2 | 7B/13B | W_pack | baichuan2 |
| BLOOM | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| BLOOMZ | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| ChatGLM3 | 6B | query_key_value | chatglm3 |
| Command-R | 35B/104B | q_proj,v_proj | cohere |
| DeepSeek (MoE) | 7B/16B/67B/236B | q_proj,v_proj | deepseek |
| Falcon | 7B/11B/40B/180B | query_key_value | falcon |
| Gemma/CodeGemma | 2B/7B | q_proj,v_proj | gemma |
| InternLM2 | 7B/20B | wqkv | intern2 |
| LLaMA | 7B/13B/33B/65B | q_proj,v_proj | - |
| LLaMA-2 | 7B/13B/70B | q_proj,v_proj | llama2 |
| LLaMA-3 | 8B/70B | q_proj,v_proj | llama3 |
| LLaVA-1.5 | 7B/13B | q_proj,v_proj | vicuna |
| Mistral/Mixtral | 7B/8x7B/8x22B | q_proj,v_proj | mistral |
| OLMo | 1B/7B | q_proj,v_proj | - |
| Phi-1.5/2 | 1.3B/2.7B | q_proj,v_proj | - |
| Phi-3 | 3.8B | qkv_proj | phi |
| Qwen | 1.8B/7B/14B/72B | c_attn | qwen |
| Qwen1.5 (Code/MoE) | 0.5B/1.8B/4B/7B/14B/32B/72B/110B | q_proj,v_proj | qwen |
| StarCoder2 | 3B/7B/15B | q_proj,v_proj | - |
| XVERSE | 7B/13B/65B | q_proj,v_proj | xverse |
| Yi (1/1.5) | 6B/9B/34B | q_proj,v_proj | yi |
| Yi-VL | 6B/34B | q_proj,v_proj | yi_vl |
| Yuan | 2B/51B/102B | q_proj,v_proj | yuan |

> [!NOTE]
> The default module is used as the default value of the `--lora_target` argument; pass `--lora_target all` to target all modules for better results (a config sketch is shown below).
>
> For all "base" models, the `--template` argument can be `default`, `alpaca`, `vicuna`, etc. For "instruct"/"chat" models, be sure to use the corresponding template.
>
> Always use the same template for training and inference.

Please refer to constants.py for the full list of models supported by this project.

You can also add your own chat template to template.py.
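To make the two knobs above concrete, here is a minimal sketch of a LoRA SFT config that sets `lora_target: all` and a chat template. The key names are assumed from `examples/lora_single_gpu/llama3_lora_sft.yaml`; the dataset name and hyperparameters are illustrative, not prescriptive.

```bash
# Minimal sketch; verify key names against examples/lora_single_gpu/llama3_lora_sft.yaml.
cat > llama3_lora_target_all.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all              # all linear modules instead of the per-model defaults listed above
dataset: alpaca_gpt4_en       # illustrative; any dataset registered in data/dataset_info.json
template: llama3              # must match the model, identically at training and inference time
cutoff_len: 1024
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
EOF
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train llama3_lora_target_all.yaml
```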

## Supported Training Approaches

| Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA |
| -------- | ----------- | ------------- | ---- | ----- |
| Pre-training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Supervised fine-tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Reward modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| PPO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| DPO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| KTO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| ORPO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
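For orientation, the rows and columns of this table roughly map onto a couple of config keys. The mapping below is an assumption based on the example configs under `examples/`; check those files for the authoritative values before relying on it.

```bash
# Assumed mapping from the table above to YAML keys (verify against examples/):
#   row (training approach)  -> stage: pt | sft | rm | ppo | dpo | kto | orpo
#   column (tuning strategy) -> finetuning_type: full | freeze | lora
#   QLoRA                    -> finetuning_type: lora  plus  quantization_bit: 4 (or 8 / 2)
# For example, a 4-bit QLoRA DPO run would carry these keys in its training YAML:
cat > dpo_qlora_fragment.yaml <<'EOF'
stage: dpo
finetuning_type: lora
quantization_bit: 4
EOF
```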

## Provided Datasets

- Pre-training datasets
- Supervised fine-tuning datasets
- Preference datasets

Some datasets require confirmation before use, so we recommend logging in to your Hugging Face account with the following commands.

```bash
pip install --upgrade huggingface_hub
huggingface-cli login
```

## Requirements

| Mandatory | Minimum | Recommended |
| --------- | ------- | ----------- |
| python | 3.8 | 3.10 |
| torch | 1.13.1 | 2.2.0 |
| transformers | 4.37.2 | 4.40.1 |
| datasets | 2.14.3 | 2.19.1 |
| accelerate | 0.27.2 | 0.30.0 |
| peft | 0.9.0 | 0.10.0 |
| trl | 0.8.1 | 0.8.6 |

| Optional | Minimum | Recommended |
| -------- | ------- | ----------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.14.0 |
| bitsandbytes | 0.39.0 | 0.43.1 |
| vllm | 0.4.0 | 0.4.2 |
| flash-attn | 2.3.0 | 2.5.8 |
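A quick way to compare the installed versions of the mandatory Python dependencies against this table (a plain sanity check, not part of the project's tooling):

```bash
# Print the installed versions of the core dependencies listed in the table above.
python - <<'EOF'
import torch, transformers, datasets, accelerate, peft, trl
for m in (torch, transformers, datasets, accelerate, peft, trl):
    print(f"{m.__name__:<14} {m.__version__}")
EOF
```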

### Hardware Requirements

\* Estimated values

| Method | Precision | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B |
| ------ | --------- | -- | --- | --- | --- | ---- | ---- | ----- |
| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |
| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |
| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |
| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |
| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |
| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |

## Getting Started

### Installation

> [!IMPORTANT]
> This step is mandatory.

```bash
git clone https://gitee.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .[torch,metrics]
```

Optional extra dependencies: torch, metrics, deepspeed, bitsandbytes, vllm, galore, badam, gptq, awq, aqlm, qwen, modelscope, quality. An example that combines several extras is shown below.
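For instance, to add DeepSpeed and vLLM support on top of the base install (extra names taken from the list above; install only the ones you need):

```bash
# Base install plus the deepspeed and vllm extras.
pip install -e ".[torch,metrics,deepspeed,vllm]"
```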

> [!TIP]
> Use `pip install --no-deps -e .` to resolve package conflicts.

#### For Windows users

To enable quantized LoRA (QLoRA) on Windows, install a pre-built bitsandbytes wheel that supports CUDA 11.1 to 12.2, choosing the release that matches your CUDA version.

```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
```

To enable FlashAttention-2 on Windows, install a pre-built flash-attn wheel that supports CUDA 12.1 to 12.2; download the version you need from flash-attention.

#### For Ascend NPU users

To run (distributed) training or inference on Ascend NPU devices, install the torch-npu library and the Ascend CANN Kernels.

| Requirement | Minimum | Recommended |
| ----------- | ------- | ----------- |
| CANN | 8.0.RC1 | 8.0.RC1 |
| torch | 2.2.0 | 2.2.0 |
| torch-npu | 2.2.0 | 2.2.0 |
| deepspeed | 0.13.2 | 0.13.2 |

Docker image:

Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the devices to use.

If inference does not work properly, try setting `do_sample: false`.
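For example, a single-device training run on an NPU could look like the following; the config path is the one used in the quickstart below and is only illustrative.

```bash
# Select NPU device 0 and launch the quickstart LoRA SFT config.
ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
```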

### Data Preparation

Please refer to data/README_zh.md for the dataset file format. You can use datasets from the Hugging Face / ModelScope hubs or load local datasets.

> [!NOTE]
> Please update `data/dataset_info.json` when using a custom dataset; a hypothetical example is sketched below.
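A minimal sketch of registering a local alpaca-style file, assuming the field names documented in `data/README_zh.md`; the file name and dataset name here are made up for illustration and should be adapted to your data.

```bash
# Create a tiny alpaca-style dataset file (hypothetical content).
cat > data/my_data.json <<'EOF'
[
  {"instruction": "Who are you?", "input": "", "output": "I am an assistant trained with LLaMA Factory."}
]
EOF
# Then register it in data/dataset_info.json with an entry such as:
#   "my_dataset": { "file_name": "my_data.json" }
# and reference it from a training config via `dataset: my_dataset`.
```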

### Quickstart

The following three commands run LoRA fine-tuning, inference, and merging for the Llama3-8B-Instruct model, respectively.

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```

See examples/README_zh.md for advanced usage, including distributed multi-GPU fine-tuning.

> [!TIP]
> Use `llamafactory-cli help` to show help information.

### Fine-Tuning with LLaMA Board GUI (powered by Gradio)

> [!IMPORTANT]
> The LLaMA Board GUI currently supports single-GPU training only.

#### Use a local environment

```bash
CUDA_VISIBLE_DEVICES=0 GRADIO_SHARE=1 llamafactory-cli webui
```

#### For Alibaba Cloud PAI and AutoDL users

If you run into display problems with LLaMA Board on Alibaba Cloud PAI, try setting the following environment variables before launching:

```bash
export GRADIO_SERVER_PORT=7860 GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
```

If you are using AutoDL, install the following Gradio version:

```bash
pip install gradio==4.10.0
```

#### Use Docker

```bash
docker build -f ./Dockerfile -t llama-factory:latest .
docker run --gpus=all \
    -v ./hf_cache:/root/.cache/huggingface/ \
    -v ./data:/app/data \
    -v ./output:/app/output \
    -e CUDA_VISIBLE_DEVICES=0 \
    -p 7860:7860 \
    --shm-size 16G \
    --name llama_factory \
    -d llama-factory:latest
```

#### Use Docker Compose

```bash
docker compose -f ./docker-compose.yml up -d
```

Details about the volumes:

- `hf_cache`: uses the Hugging Face cache directory on the host machine; it can be changed to a new directory.
- `data`: the folder on the host machine where the datasets are stored.
- `output`: set the export directory to this path so that the exported model can be accessed on the host machine.

### Deploy an OpenAI-style API with vLLM

```bash
CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
```
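Once the server is up, it can be queried like any OpenAI-compatible endpoint. The sketch below uses curl with the port from `API_PORT=8000` above and the standard OpenAI chat-completions payload; the model name is a placeholder, since the served model is fixed by the config.

```bash
# Send a chat-completion request to the locally deployed OpenAI-style API.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
```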

### Download from ModelScope Hub

If you have trouble downloading models and datasets from Hugging Face, you can use the ModelScope Hub as follows.

```bash
export USE_MODELSCOPE_HUB=1 # use `set USE_MODELSCOPE_HUB=1` on Windows
```

Set `--model_name_or_path` to a model ID to load the corresponding model. Browse all available models on the ModelScope Hub, e.g. `LLM-Research/Meta-Llama-3-8B-Instruct`.

## Projects Using LLaMA Factory

If you have a project that should be added to the list below, please reach out via email or create a PR.
  1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [arxiv]
  2. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [arxiv]
  3. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [arxiv]
  4. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [arxiv]
  5. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [arxiv]
  6. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. 2024. [arxiv]
  7. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. 2024. [arxiv]
  8. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [arxiv]
  9. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [arxiv]
  10. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [arxiv]
  11. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [arxiv]
  12. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [arxiv]
  13. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [arxiv]
  14. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. 2024. [arxiv]
  15. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [arxiv]
  16. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [arxiv]
  17. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [arxiv]
  18. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. 2024. [arxiv]
  19. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [arxiv]
  20. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [arxiv]
  21. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [arxiv]
  22. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [arxiv]
  23. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [arxiv]
  24. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [arxiv]
  25. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. 2024. [arxiv]
  26. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [arxiv]
  27. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [arxiv]
  28. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [arxiv]
  29. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [arxiv]
  30. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. 2024. [arxiv]
  31. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [arxiv]
  32. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [arxiv]
  33. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [arxiv]
  34. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [arxiv]
  35. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [arxiv]
  36. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. 2024. [arxiv]
  37. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. 2024. [arxiv]
  38. StarWhisper: a large language model for astronomy, fine-tuned on astronomical data based on ChatGLM2-6B and Qwen-14B.
  39. DISC-LawLLM: a large language model for the Chinese legal domain, fine-tuned from Baichuan-13B, with legal reasoning and knowledge-retrieval capabilities.
  40. Sunsimiao: a Chinese medical large language model, fine-tuned on Chinese medical data based on Baichuan-7B and ChatGLM-6B.
  41. CareGPT: a series of medical large language models, fine-tuned on Chinese medical data based on LLaMA2-7B and Baichuan-13B.
  42. MachineMindset: a series of MBTI-personality large language models, capable of giving any LLM one of the 16 personality types through tailored datasets and training methods.
  43. Luminia-13B-v3: a large language model for generating Stable Diffusion prompts. [🤗Demo]
  44. Chinese-LLaVA-Med: a Chinese multimodal medical large language model, fine-tuned on Chinese multimodal medical data based on LLaVA-1.5-7B.

## License

The code in this repository is open-sourced under the Apache-2.0 License.

Please follow the corresponding model licenses when using the model weights: Baichuan2 / BLOOM / ChatGLM3 / Command-R / DeepSeek / Falcon / Gemma / InternLM2 / LLaMA / LLaMA-2 (LLaVA-1.5) / LLaMA-3 / Mistral / OLMo / Phi-1.5/2 / Phi-3 / Qwen / StarCoder2 / XVERSE / Yi / Yi-1.5 / Yuan

## Citation

If you find this work helpful, please kindly cite it as:

```bibtex
@article{zheng2024llamafactory,
  title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
  author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Yongqiang Ma},
  journal={arXiv preprint arXiv:2403.13372},
  year={2024},
  url={http://arxiv.org/abs/2403.13372}
}
```

## Acknowledgement

This project benefits from PEFT, TRL, QLoRA, and FastChat. Thanks to their authors for their wonderful work.

## Star History

*(star history chart)*
