Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some info from Gemini on the AGI topic... interestingly so.

The short answer is **no**. While the **OpenClaw (formerly Clawdbot/Moltbot)** breakthrough is a massive leap in **agentic autonomy**, it is not AGI. It is a highly sophisticated orchestration layer, a "wrapper with hands", that makes current LLMs feel like AGI by giving them system-level agency.

## Why it feels like AGI

OpenClaw has triggered "AGI panic" because it shifts AI from a passive chatbot to a proactive agent. Its impact on the ecosystem is undeniable:

* **Proactive "Heartbeat":** It doesn't wait for you; it wakes up, checks its `HEARTBEAT.md` checklist, and acts on your behalf.
* **Recursive Skill Building:** It can write its own code to create new "skills," effectively expanding its own utility without human intervention.
* **Real-World Integration:** Through tools like *RentAHuman*, agents are now hiring people to perform physical tasks (picking up laundry, taking photos), bridging the digital-physical divide.

## Why it isn’t AGI

If we define AGI as a system that possesses human-level reasoning across all domains and self-evolves its core intelligence, OpenClaw falls short in three key areas:

* **Model Dependency:** OpenClaw is an interface. Its "intelligence" is entirely borrowed from the underlying LLM (Claude 3.5/4, GPT-5, or DeepSeek). If the model hallucinates, the agent fails, potentially with higher stakes because it has shell access.
* **Lack of Novel Generalization:** It is excellent at executing and chaining known tasks, but it doesn't "invent" new logic outside of its training data. It is a hyper-efficient optimizer, not a sentient creator.
* **Reliability & Security:** As Cisco and CrowdStrike research highlighted, it is still susceptible to prompt injection and "system drift." A true AGI would likely possess the self-awareness to recognize and ignore malicious overrides of its core objectives.
### The Strategist's View

We haven't reached AGI, but we have reached the **Age of the Autonomous Employee**. The "Clawdbot" saga proves that the bottleneck to AGI wasn't just the "brain" (the model), but the "nervous system" (the ability to interact with the OS).
youtube 2026-02-18T18:2…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id":"ytc_Ugw6xEyOZ4Yq9nq-1Al4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyY92mDSPMj9BVq4yt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyrGGLKzZvl-a9evhp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwRS73cnEwkP0Zpuh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx6DVT9awLBUgYe5k54AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgydSspySuBlqcqA61R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgweVROtLaidBrQI4e14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzyjm9UA4zqVQBja0J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyojzNYO6_Ts3FO2Dl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxyqAoSYVWpKFLhp1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
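The raw response is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output might be parsed and sanity-checked; the allowed label sets below are inferred from the values appearing on this page, not an official codebook:

```python
import json
from collections import Counter

# Label sets per dimension -- an assumption inferred from the values seen
# in the coded output above, not a documented schema.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "user", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def invalid_ids(records):
    """Return ids of records whose value in any dimension is outside ALLOWED."""
    return [
        rec["id"]
        for rec in records
        if any(rec.get(dim) not in labels for dim, labels in ALLOWED.items())
    ]

# First record from the raw response above, used as sample input.
raw = ('[{"id":"ytc_Ugw6xEyOZ4Yq9nq-1Al4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
records = json.loads(raw)

print(invalid_ids(records))                    # []
print(Counter(r["emotion"] for r in records))  # Counter({'fear': 1})
```

Validating against fixed label sets catches the common failure mode where the model invents a category outside the coding scheme, before the codes are aggregated.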