Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If an Ai was smart enough to pose a legitimate threat, wouldn't it be smart enough to find ways to bypass it's kill switch?
Source: reddit · AI Governance · 1716794362.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_l5u8vbr", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_l5uqkbw", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_l5v0j2u", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_l5u3nyt", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_l5u5yis", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
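Since the raw LLM response is a JSON array covering a whole batch, matching a single comment's coding means looking up its record by `id`. A minimal sketch of that lookup, assuming the batch structure shown above (the variable names and the choice of `json.loads` are illustrative, not part of the tool itself):

```python
import json

# Raw LLM response for the batch, exactly as shown above: a JSON array
# of coding records, one per comment, keyed by a comment id.
raw_response = """[
  {"id": "rdc_l5u8vbr", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_l5uqkbw", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_l5v0j2u", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_l5u3nyt", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_l5u5yis", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]"""

records = json.loads(raw_response)

# Index the batch by comment id so one coded comment can be inspected directly.
by_id = {record["id"]: record for record in records}

# "rdc_l5uqkbw" is the record whose values appear in the Coding Result table.
coding = by_id["rdc_l5uqkbw"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself fear
```

This mirrors the table above: the displayed Coding Result is simply the batch record whose `id` matches the comment, with the `Coded at` timestamp added by the tool rather than the model.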