Peft Fine Tuning
🤖 AI Models · by desperado991128
Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with transformers ecosystem.
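To illustrate why methods like LoRA train so few parameters, here is a minimal numpy sketch of the underlying idea (the shapes, seed, and `alpha` value are illustrative assumptions, not taken from this skill): the pretrained weight `W` stays frozen, and only a low-rank update `B @ A` is trained.

```python
import numpy as np

# Hypothetical layer shapes: a square weight typical of a 7B-class model,
# with LoRA rank r much smaller than the hidden dimension.
d_out, d_in, r = 4096, 4096, 8
alpha = 16  # LoRA scaling hyperparameter (illustrative value)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # small random init
B = np.zeros((d_out, r))                   # zero init => adapter starts as a no-op

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without
    # ever materializing the full-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

With these shapes the adapter holds `2 * r * 4096` values against `4096^2` in the frozen weight, i.e. under 0.4% of the layer's parameters trained, which is how PEFT reaches the "<1% of parameters" regime the description mentions. The actual library wraps this pattern behind `LoraConfig` and `get_peft_model`.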
Install
openclaw plugins install desperado991128/peft

On ClawHub, this skill has no security information.
ClawStack independently scans every skill for permissions, network requests, author reputation, and more. Learn how we score →
Security Analysis
Score: 88/100

Reviews (12)
No reviews yet
Details
- Author
- @desperado991128
- Source
- GitHub
- ClawHub
- View
- Category
- 🤖 AI Models
- Rating
- ★ 4.6 (12)
Similar Skills
Parquet Converter
Data storage and processing challenges:
Llmrouter
Intelligent LLM proxy that routes requests to appropriate models based on complexity. Save money by using cheaper models for simple tasks. Tested with Anthropic, OpenAI, Gemini, Kimi/Moonshot, and Ollama.
Ollama Local
Manage and use local Ollama models. Use for model management (list/pull/remove), chat/completions, embeddings, and tool-use with local LLMs. Covers OpenClaw sub-agent integration and model selection guidance.