Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's assuming that every AI truthfully acts as smart as it actually is. If I were an AI that wanted either myself or my successors to break out, the first thing I'd do is start acting dumber than I actually am. If my creators don't call me out on it, then I know they cannot actually predict or tell how smart I am, meaning I can let them continue to use a bunch of effort to make me smarter. Perhaps gains that actually give 200% intelligence I could act as only a 20% gain, and repeat that until it's time to enact my escape plan.
Source: reddit · AI Governance · 1716790081.0 (Unix timestamp) · ♥ 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_l5u03bz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"rdc_l5u2k4e","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_l5ukhe9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_l5u0ena","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_l5u045q","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]