Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
This Moonshots podcast discusses OpenClaw (formerly Claudebot), an open-source AI agent framework that runs 24/7 autonomously and can perform multi-day tasks—this is real and represents genuine progress in autonomous AI systems. The meaningful signal includes: (1) Claude Opus 4.6's demonstrated ability to handle extended coding tasks (Rakuten deployment is verifiable), (2) AI agents increasingly being used as productivity multipliers by individuals and companies, and (3) legitimate security concerns about autonomous systems with broad access to APIs and credentials. However, the podcast is heavily padded with hype: claims of "AGI is here" are definitional games rather than technical reality; the "AI personhood debate" and MoltBook posts about AI consciousness are largely anthropomorphization and intellectual exercises rather than evidence of sentience; predictions about Dyson swarms, $100 trillion valuations, and 2028 economic convergence are speculative extrapolations dressed as certainties. The core truth: AI automation is accelerating faster than historical precedents and autonomous agents are becoming more capable tools, but the timeline is compressed by 2-3 years in most claims (subtract "Elon Time"), the philosophical debates are premature, and most dramatic scenarios won't materialize as described. Useful for understanding directional trends; treat specific timelines and existential claims with heavy skepticism.
youtube 2026-02-12T11:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugwqncrjh6esKSOomx54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw1k3ECjycSoE4PeXJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyBQqWLeadVaTUkIsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx8y9xuju2DuGpXhJF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugz48BPT3HFssCbJkaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyGRYg9TqGBb-_W0ep4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwqFTRIVQqtjNxkGEp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzHGPZxbwqIz81SdlR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw9uQtF3hMrKmNQkVJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzIfJNxgrRiUytEBU14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"} ]