Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI isn’t “hiding its full power.” That framing is not just imprecise; it is scientifically misleading. When people say “AI is hiding” or “AI will do X,” they are implicitly treating “AI” as a single, unified, intentional agent. It is not. There are only specific systems, built, trained, and deployed by people, operating under defined conditions.

If someone claims “AI is hiding its capabilities,” a few basic questions have to be answered:

1. Which system, specifically?
2. What architecture and training setup?
3. Under what inputs or evaluation conditions?
4. What observable behavior counts as “hiding”?
5. How would you test or falsify that claim?

Without those details, the statement isn’t just vague; it is untestable. A testable version would sound like: “Model X, fine-tuned with method Y, produces different outputs under benchmark prompts than under deployment prompts.” That is something you can measure, reproduce, and debate.

More importantly, in science we don’t infer intent from behavior alone; we require a mechanism that explains how that behavior arises. “Hiding” implies awareness, goals, and strategy. Where is the mechanism for that? Until we can answer that, and until the claim can be falsified, “AI is hiding its full power” is not science; it is speculation framed as an explanation.
youtube AI Moral Status 2026-04-11T21:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxUE851Uu6IAFjBD6V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyWOVItli4t7j66rgl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyBGRMFssS0rDFC2dh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz76tCgTmmOhS_tksp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwqtmLfQCFTdMg0zo14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwZ4HN32IMqUnu8hfl4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzmZC1L9hErZlmW9ZF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQ8xZ2ZGYiV34g8Y94AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwhPAy3IeHpL46M3Ah4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzGIVjdZPG-KsVG6It4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
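The raw response above is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied — the field names come from the dump itself, but this parsing code is a hypothetical illustration, not part of the original pipeline:

```python
import json
from collections import Counter

# A two-entry excerpt of the raw LLM response above, verbatim in structure.
raw = """[
  {"id": "ytc_UgxUE851Uu6IAFjBD6V4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwqtmLfQCFTdMg0zo14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]"""

# Parse the batch; each element is one comment's coding.
codings = json.loads(raw)

# Tally one dimension across all coded comments.
emotions = Counter(c["emotion"] for c in codings)
print(emotions)  # Counter({'indifference': 1, 'outrage': 1})
```

The same `Counter` pattern works for any of the four dimensions, which makes it easy to spot-check whether the model's per-comment codings aggregate plausibly.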