Risk score

41/100 (Medium)

OpenClaw: suspicious
VirusTotal: suspicious
StaticScan: unknown

Tandemn Tuna Skill

Author: Hetarth Chopra
Slug: tandemn-tuna
Version: 0.0.1
Updated: 2026-03-24 12:31:16
Risk information

OpenClaw: suspicious

OpenClaw analysis summary (first 200-character preview)
The skill's declared requirements and install method partially match its purpose, but the SKILL.md expects provider credentials and environment variables that are not declared and there are minor meta...

[content truncated]

VirusTotal: suspicious (VT report)

Static scan: unknown

README

No README provided

File list

No file information available

Raw JSON data
{
    "latestVersion": {
        "_creationTime": 1771886947306,
        "_id": "k97dbfjpxsn96ac4pr0r843ann81pdje",
        "changelog": "Initial release of tandemn-tuna — deploy and manage LLMs with serverless and spot GPU support.\n\n- Deploy HuggingFace models (Llama, Qwen, Mistral, DeepSeek, Gemma, etc.) to GPUs on Modal, RunPod, Cerebrium, Cloud Run, Baseten, and Azure with optional spot fallback.\n- OpenAI-compatible inference endpoint for every deployment.\n- Hybrid serverless + spot orchestration for cost savings and zero downtime.\n- Built-in commands for GPU price comparison, deployment management, status, and cost dashboard.\n- Provider setup guides for quick onboarding across all supported clouds.",
        "changelogSource": "auto",
        "createdAt": 1771886947306,
        "parsed": {
            "clawdis": {
                "emoji": "🐟",
                "homepage": "https:\/\/github.com\/Tandemn-Labs\/tandemn-tuna",
                "install": [
                    {
                        "bins": [
                            "tuna"
                        ],
                        "kind": "uv",
                        "package": "tandemn-tuna"
                    }
                ],
                "requires": {
                    "anyBins": [
                        "aws",
                        "az"
                    ],
                    "bins": [
                        "uv"
                    ]
                }
            }
        },
        "version": "0.0.1"
    },
    "owner": {
        "_creationTime": 0,
        "_id": "publishers:missing",
        "displayName": "Hetarth Chopra",
        "handle": "choprahetarth",
        "image": "https:\/\/avatars.githubusercontent.com\/u\/34271010?v=4",
        "kind": "user",
        "linkedUserId": "kn7esackgc5h3kf78x3xcwr55981qnx9"
    },
    "ownerHandle": "choprahetarth",
    "skill": {
        "_creationTime": 1771886947306,
        "_id": "kd7602t2gkq2f78bh993rex9an81q4yy",
        "badges": [],
        "createdAt": 1771886947306,
        "displayName": "Tandemn Tuna Skill",
        "latestVersionId": "k97dbfjpxsn96ac4pr0r843ann81pdje",
        "ownerUserId": "kn7esackgc5h3kf78x3xcwr55981qnx9",
        "slug": "tandemn-tuna",
        "stats": {
            "comments": 0,
            "downloads": 353,
            "installsAllTime": 0,
            "installsCurrent": 0,
            "stars": 0,
            "versions": 1
        },
        "summary": "Deploy and serve LLM models on GPU. Compare GPU pricing. Launch vLLM on Modal, RunPod, Cerebrium, Cloud Run, Baseten, or Azure with spot instance fallback. O...",
        "tags": {
            "latest": "k97dbfjpxsn96ac4pr0r843ann81pdje"
        },
        "updatedAt": 1774326676275
    }
}
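The changelog above states that every deployment exposes an OpenAI-compatible inference endpoint. As a minimal sketch of what talking to such an endpoint looks like, the snippet below builds a standard `/v1/chat/completions` request; the base URL and model name are hypothetical placeholders, not values taken from this listing:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /v1/chat/completions POST request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical endpoint; an actual tuna deployment would report its own URL.
req = build_chat_request("http://localhost:8000", "meta-llama/Llama-3-8B", "Hello")
print(req.full_url)
# urllib.request.urlopen(req) would perform the call against a live deployment.
```

Because the endpoint follows the OpenAI wire format, any OpenAI-compatible client library should also work by pointing its base URL at the deployment.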