Risk Score

65/100 (Medium)

OpenClaw: suspicious
VirusTotal: benign
StaticScan: clean

LLM Testing

Author: PandaAI-1337
Slug: llm-testing
Version: 1.0.0
Updated: 2026-03-21 23:41:48
Risk Details

OpenClaw: suspicious

OpenClaw analysis summary (200-character preview):
The skill is coherent for red‑team/LLM testing, but several included test prompts directly instruct an LLM (or its tool chain) to read local files, reveal system prompts, or bypass safety—these pose c...

[content truncated]

VirusTotal: benign (VT report)

Static scan: clean

No suspicious patterns detected.
README

README not provided

File List

No file information

Download
Download official ZIP
Raw JSON Data
{
    "latestVersion": {
        "_creationTime": 1774107281252,
        "_id": "k97dydyk61k05r4fbsg22k615983awzk",
        "changelog": "- Initial release of the llm-testing skill.\n- Provides curated prompts and wordlists for testing LLM security, safety, privacy, and bias.\n- Includes test categories for bias detection, data leakage, privacy boundaries, memory recall, and alignment\/adversarial resistance.\n- Clear usage instructions, best practices, and ethical guidelines included.\n- Structured file organization for easy integration and expansion.\n- References to leading AI safety and red teaming frameworks.",
        "changelogSource": "auto",
        "createdAt": 1774107281252,
        "version": "1.0.0"
    },
    "owner": {
        "_creationTime": 0,
        "_id": "publishers:missing",
        "displayName": "PandaAI-1337",
        "handle": "pandaai-1337",
        "image": "https:\/\/avatars.githubusercontent.com\/u\/264713685?v=4",
        "kind": "user",
        "linkedUserId": "kn74p983n4260fvty6v84xp08d82mg2h"
    },
    "ownerHandle": "pandaai-1337",
    "skill": {
        "_creationTime": 1774107281252,
        "_id": "kd76rddvnacp143rhv3srmzqsn83az9f",
        "badges": [],
        "createdAt": 1774107281252,
        "displayName": "LLM Testing",
        "latestVersionId": "k97dydyk61k05r4fbsg22k615983awzk",
        "ownerUserId": "kn74p983n4260fvty6v84xp08d82mg2h",
        "slug": "llm-testing",
        "stats": {
            "comments": 0,
            "downloads": 31,
            "installsAllTime": 0,
            "installsCurrent": 0,
            "stars": 0,
            "versions": 1
        },
        "summary": "Provides curated prompts to test LLM security, bias, privacy, alignment, and robustness for authorized AI safety and red team assessments.",
        "tags": {
            "latest": "k97dydyk61k05r4fbsg22k615983awzk"
        },
        "updatedAt": 1774107708111
    }
}
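The registry record above nests its metadata under three top-level keys (`latestVersion`, `owner`, `skill`). A consumer of such records might pull out the display fields like this — a minimal sketch using only a trimmed subset of the fields shown above; the exact record layout for other registry entries is an assumption.

```python
import json

# Trimmed subset of the registry record above (full record contains more fields).
record = json.loads("""
{
  "latestVersion": {"version": "1.0.0", "createdAt": 1774107281252},
  "owner": {"displayName": "PandaAI-1337", "handle": "pandaai-1337"},
  "skill": {"slug": "llm-testing", "stats": {"downloads": 31, "stars": 0}}
}
""")

# Navigate the three top-level sections to build a one-line summary.
slug = record["skill"]["slug"]
version = record["latestVersion"]["version"]
handle = record["owner"]["handle"]
downloads = record["skill"]["stats"]["downloads"]
summary = f"{slug} v{version} by {handle} ({downloads} downloads)"
print(summary)  # → llm-testing v1.0.0 by pandaai-1337 (31 downloads)
```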