Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it's worth pointing out that none of these AIs "know" what they're doing, because LLMs don't work that way. When they act maliciously, they're simply calculating an output based on their training data and the recent prompts in their memory, which gets decoded into messages that then have to be parsed by a program to do something (e.g. generate a picture, google something). The real threat of LLMs is much more mundane than hyperintelligences deciding humanity must be destroyed through elaborate schemes: it lies in people anthropomorphising or deifying LLMs, much as someone might be radicalized by propaganda, or in believing an LLM is far more competent and less volatile than it currently is and giving it permissions it shouldn't have. In that sense AI is dangerous the same way morphine is dangerous, the danger being reckless or malicious use by humans rather than the AI per se.
YouTube · AI Governance · 2025-09-24T13:2… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgwUt6RReY_9bkL9uw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugztx5osCwZvJ20WMvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzUcgF6fgHCor11mEN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyYS9abGPH8lcirJT54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzz3XCvMEXF0uy9tRV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
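The coding table for a single comment is recovered by parsing the batch JSON above and selecting the record whose `id` matches the comment. A minimal sketch of that lookup (the function name is illustrative, and only two of the five records from the response are included here for brevity):

```python
import json

# Abridged subset of the raw LLM batch response shown above.
raw_response = '''[
  {"id":"ytc_UgwUt6RReY_9bkL9uw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugztx5osCwZvJ20WMvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the batch response and return the coding record for one comment."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

# The comment coded as ai_itself / consequentialist / unclear / fear.
row = coding_for(raw_response, "ytc_Ugztx5osCwZvJ20WMvF4AaABAg")
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → ai_itself consequentialist unclear fear
```

Because the model returns one JSON object per comment, a failed lookup (missing `id`) is a direct signal that the batch response is incomplete or malformed.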