Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My worry is that AI doesn't even need to be smart to be dangerous; it just needs to be given sufficient power and influence. We've seen firsthand with Trump how dangerous stupidity can be. And with AI's known problems with hallucinations, what would happen if an AI missile detection system perceived a non-existent nuke, much like non-AI systems have done in our past?
youtube AI Governance 2025-09-09T16:3…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugz3tEFaki6XWzJ96JR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxDv50Na8mMhcp0ODF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxL3X0vUNFnfb2yOQx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "approval"},
  {"id": "ytc_Ugx17s0Rwm5Httma3DR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwOOBnrFiZ5YkwekIl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
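A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes visible on this page, so the real codebook may contain more.

```python
import json

# Allowed code values per dimension, inferred from the records shown above.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records.

    A record is kept when its id looks like a YouTube comment id
    ("ytc_" prefix) and every dimension carries an in-codebook value.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Filtering rather than raising on bad records lets a batch run survive a single malformed line; rejected records can be logged and re-coded separately.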