Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
LLM (Large Language Model) Fine-Tuning
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
Collection of resources for finetuning Large Language Models (LLMs).
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
Enhancing Large Vision Language Models with Self-Training on Image Comprehension.
An npm-like package ecosystem for prompts 🤖
GenAssist combines orchestration, runtime, analytics, and learning in one open platform.
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Sustain-LC is a benchmarking environment for traditional, reinforcement-learning-based, and LLM-based control.
A distributed training framework for large language models powered by Lightning.
Distributed Reinforcement Learning for LLM Fine-Tuning with multi-GPU utilization
Collecting data for building Lucknow's first LLM.
Quick start with the Chinese Llama 3 large model: a fine-tuning tutorial for general-purpose Chinese LLMs, built on Meta-Llama3.
The official repo of paper "Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller"
Solving catastrophic forgetting with Recursive Time architecture, Active Sleep (generative replay), and Temporal LoRA. Proving the "Lazarus Effect" in neural networks.
This repository contains code associated with Neuro-LIFT: A Neuromorphic, LLM-based Interactive Framework for Autonomous Drone FlighT at the Edge
Fine-tuning Wizard models with QLoRA.
LLM fine-tuning with Axolotl using sensible defaults, plus an optional TrueFoundry experiment-tracking extension.
Code for [ICML 2025] Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation
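Several of the repositories above (the QLoRA, Temporal LoRA, and Axolotl entries) build on LoRA-style adapters. A minimal sketch of the LoRA forward pass, using NumPy and hypothetical dimensions not tied to any specific repository:

```python
# Minimal LoRA sketch: a frozen weight W plus a low-rank trainable
# update (alpha/r) * B @ A. Shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter is a no-op and the base model
# output is preserved exactly at the start of training.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B (2 * r * d parameters per layer instead of d * d) are trained; QLoRA additionally quantizes the frozen W to 4 bits to cut memory further.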