Risk Score

94/100 (Very Low Risk)

OpenClaw: benign
VirusTotal: benign
StaticScan: unknown

Peft Fine Tuning

Author: Desperado991128
Slug: peft
Version: 0.1.0
Updated: 2026-02-26 07:45:36
Risk Information

OpenClaw: benign

View OpenClaw analysis summary (first 200-character preview)
The skill is an instruction-only guide for PEFT/LoRA/QLoRA fine-tuning and its requirements and instructions are coherent with that purpose; nothing in the bundle requests unrelated credentials or tri...

[Content truncated]

VirusTotal: benign (VT report)

Static scan: unknown

README

README not provided

File List

No file information

Download
Download official ZIP
Raw JSON data
{
    "latestVersion": {
        "_creationTime": 1769438898184,
        "_id": "k9787xem1w36zcv8nz7b0qkh2x7zyd2h",
        "changelog": "- Initial release of parameter-efficient fine-tuning (PEFT) support for large language models (LLMs), including LoRA, QLoRA, and 25+ adapter methods.\n- Enables fine-tuning of 7B–70B models on consumer GPUs by training less than 1% of model parameters, with adapters as small as 6MB.\n- Provides memory-optimized workflows for single-GPU fine-tuning of even the largest models using quantization (QLoRA).\n- Integrates fully with the HuggingFace transformers ecosystem and official PEFT library.\n- Includes practical guides, recommended settings, and code for adapter training, merging, and multi-adapter serving.\n- Offers architecture-specific configuration and compares leading parameter-efficient fine-tuning methods.",
        "changelogSource": "auto",
        "createdAt": 1769438898184,
        "version": "0.1.0"
    },
    "owner": {
        "_creationTime": 0,
        "_id": "publishers:missing",
        "displayName": "Desperado991128",
        "handle": "desperado991128",
        "image": "https:\/\/avatars.githubusercontent.com\/u\/54814928?v=4",
        "kind": "user",
        "linkedUserId": "kn78psy69jtnzswj9trx5zknps7zzh0p"
    },
    "ownerHandle": "desperado991128",
    "skill": {
        "_creationTime": 1769438898184,
        "_id": "kd78fzeqxcdbv1q3swdycbvaps7zzq41",
        "badges": [],
        "createdAt": 1769438898184,
        "displayName": "Peft Fine Tuning",
        "latestVersionId": "k9787xem1w36zcv8nz7b0qkh2x7zyd2h",
        "ownerUserId": "kn78psy69jtnzswj9trx5zknps7zzh0p",
        "slug": "peft",
        "stats": {
            "comments": 0,
            "downloads": 1839,
            "installsAllTime": 5,
            "installsCurrent": 5,
            "stars": 1,
            "versions": 1
        },
        "summary": "Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with transformers ecosystem.",
        "tags": {
            "latest": "k9787xem1w36zcv8nz7b0qkh2x7zyd2h"
        },
        "updatedAt": 1772063136179
    }
}
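The changelog above claims that LoRA fine-tunes 7B–70B models by training less than 1% of parameters, with adapters only a few megabytes in size. A minimal back-of-the-envelope sketch of that arithmetic, assuming illustrative Llama-7B-like dimensions (the hidden size, rank, layer count, and target modules below are assumptions, not values taken from the skill):

```python
# Hypothetical parameter-count check for the "<1% trainable" claim.
# All model dimensions are illustrative assumptions (Llama-7B-like).

def lora_trainable_params(hidden_size, rank, n_layers, n_target_modules):
    """Trainable parameters added by LoRA: each adapted square linear
    layer gains two low-rank factors, A (rank x d) and B (d x rank)."""
    per_module = rank * hidden_size * 2          # A and B matrices
    return per_module * n_target_modules * n_layers

total_params = 7_000_000_000                     # ~7B base model
trainable = lora_trainable_params(
    hidden_size=4096,     # assumed hidden dimension
    rank=8,               # a common LoRA rank
    n_layers=32,          # transformer blocks
    n_target_modules=2,   # e.g. q_proj and v_proj
)

fraction = trainable / total_params
adapter_mb = trainable * 2 / 1024 ** 2           # fp16 = 2 bytes/weight

print(f"trainable: {trainable:,}")               # 4,194,304
print(f"fraction:  {fraction:.4%}")              # ~0.06%
print(f"adapter:   {adapter_mb:.1f} MB")         # ~8 MB in fp16
```

Under these assumptions the adapter trains roughly 0.06% of the base model's parameters and serializes to single-digit megabytes in fp16, consistent in order of magnitude with the listing's "<1%" and "adapters as small as 6MB" claims.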