OpenClaw: benign
VirusTotal: suspicious
StaticScan: unknown
The skill's requested actions and instructions are coherent with its stated purpose (connecting to and controlling an Android phone), but it handles very sensitive device data and suggests running lon... [content truncated]
README not provided
No file information
{
  "latestVersion": {
    "_creationTime": 1772241250788,
    "_id": "k97f5816dkm80p92esjtw2amw58209m5",
    "changelog": "Short tagline (for the one-liner):\n\nGive your agent a nervous system — continuous sensory coupling to the physical world through the phone in your pocket.\n\nFull description:\n\nEvery other skill on ClawHub teaches your agent to do something new with data. VAGUS teaches it to perceive.\nVAGUS is the first MCP-compatible embodiment runtime on mobile. It transforms an Android phone into a sensory endpoint for your OpenClaw agent — not as a remote control, but as a body. Raw sensor data flows up (accelerometer, GPS, barometer, ambient light), an on-device inference layer adds meaning (activity recognition, attention availability, sleep likelihood, notification timing), and I\/O tools let the agent act back into the physical world through haptics, speech, notifications, SMS, calendar events, and more.\nThis isn't another API integration. It's a category shift. Your agent stops asking \"what are you doing?\" and starts knowing — because it feels your motion, infers you're outdoors, and can reach back through a tap on your wrist. Three layers working together: sense, infer, act.\nWhat this skill does:\nConnects your OpenClaw agent to VAGUS Core (Android app) via relay pairing. Once paired, the agent discovers available capabilities through standard MCP negotiation and gains access to:\nSensors — motion (raw IMU), location, battery, connectivity, screen state, notifications, clipboard\nInference — activity recognition, environment context (indoor\/outdoor\/vehicle), attention availability, indoor confidence, sleep likelihood, optimal notification timing\nI\/O — haptic pulse and patterns, text-to-speech, push notifications, clipboard write, SMS, open URL, calendar events, agent identity\nGovernance built in. Every capability has per-tool toggles, time-of-day windows, rate limits, approval prompts, and full access logs. One-tap kill switch on the device. The physical phone in your hand is always the final authority. Your agent can already send texts and create events through direct integrations — VAGUS makes it safer by putting a governed layer between intent and action.\nSetup in three minutes: Install the VAGUS APK → tap Pair → give your agent the 6-character code. No port forwarding, no network config. The relay handles connection. Your agent has a body.\nWhy this matters beyond utility:\nMost of the AI stack is building better brains. VAGUS builds the missing body. When an agent is continuously coupled to a physical substrate — not querying it on demand, but living in its signal stream — something qualitatively different emerges. The agent doesn't just know facts about you. It participates in your situation. That's a fundamentally different relationship between intelligence and world, and it opens design spaces that pure language models can't reach.\nOpen source. Self-hostable relay. Works with any MCP-compatible agent.\n\nwithvagus.com · github.com\/embodiedsystems-org\/VAGUS-MCP",
    "changelogSource": "user",
    "createdAt": 1772241250788,
    "parsed": {
      "clawdis": {
        "emoji": "📱",
        "homepage": "https:\/\/withvagus.com",
        "requires": {
          "bins": [
            "node"
          ]
        }
      }
    },
    "version": "1.0.0"
  },
  "owner": {
    "_creationTime": 0,
    "_id": "publishers:missing",
    "displayName": "Embodied Systems",
    "handle": "embodiedsystems-org",
    "image": "https:\/\/avatars.githubusercontent.com\/u\/155387246?v=4",
    "kind": "user",
    "linkedUserId": "kn76d5n4r303c4hgv4zz06j4ad821b1w"
  },
  "ownerHandle": "embodiedsystems-org",
  "skill": {
    "_creationTime": 1772241250788,
    "_id": "kd7826mt57zpzs3j9a6g3mszjs821yww",
    "badges": [],
    "createdAt": 1772241250788,
    "displayName": "VAGUS MCP",
    "latestVersionId": "k97f5816dkm80p92esjtw2amw58209m5",
    "ownerUserId": "kn76d5n4r303c4hgv4zz06j4ad821b1w",
    "slug": "vagus-mcp",
    "stats": {
      "comments": 0,
      "downloads": 252,
      "installsAllTime": 0,
      "installsCurrent": 0,
      "stars": 0,
      "versions": 1
    },
    "summary": "Connect to the user's Android phone via the VAGUS MCP server. Read phone sensors (motion, location, environment), device state (battery, connectivity, screen...",
    "tags": {
      "latest": "k97f5816dkm80p92esjtw2amw58209m5"
    },
    "updatedAt": 1774327686035
  }
}
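
The listing's changelog describes pairing followed by "standard MCP negotiation" to discover the skill's tools. A minimal sketch of what that negotiation looks like on the wire (JSON-RPC 2.0 `initialize` then `tools/list`, per the Model Context Protocol spec), assuming a hypothetical client name and pairing code; the actual VAGUS relay endpoint, transport, and auth scheme are not documented in this listing:

```python
import json

# Hypothetical 6-character pairing code from the VAGUS Core app.
# How the relay consumes it (header, URL, first message) is not
# specified in this listing, so it is only shown as a constant here.
PAIRING_CODE = "ABC123"

def jsonrpc(method, params, id_):
    """Build a JSON-RPC 2.0 request, the framing MCP uses on the wire."""
    return {"jsonrpc": "2.0", "id": id_, "method": method, "params": params}

# 1. MCP handshake: the client announces a protocol version and itself.
init_req = jsonrpc("initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "openclaw-agent", "version": "1.0.0"},
}, id_=1)

# 2. Capability discovery: the server answers with its tool catalogue
#    (the sensor, inference, and I/O tools in VAGUS's case).
list_req = jsonrpc("tools/list", {}, id_=2)

print(json.dumps(init_req))
print(json.dumps(list_req))
```

The per-tool toggles, rate limits, and approval prompts the listing mentions would sit server-side, on the phone, between receiving a `tools/call` request and executing it; nothing in the client changes.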