OpenFedLLM: Training Large Language Models on Decentralized Private Data
via Federated Learning
Trained on massive amounts of publicly available data, large language models (LLMs) have demonstrated tremendous success across various fields. While more data contributes to better performance, a disconcerting reality is that high-quality public data will be exhausted within a few years. In this paper, we offer a potential next step for contemporary …