Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wouldn't bet on it. If we're doing full out AI, there's no reason every bot wouldn't be able to repair itself/others. If you're gonna give a bot that sort of access, your hopes of hiding a kill switch are slim.
reddit AI Governance 1438004674.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_cthp6df","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_cthqads","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_cthprwp","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_cths9iq","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
  {"id":"rdc_cthsduv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
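The raw response above is a JSON array covering a batch of comments, so inspecting one comment means matching on its `id`. A minimal sketch of that lookup, using the ids and values shown on this page (the variable names are illustrative, not part of any tool):

```python
import json

# Raw batch response exactly as returned by the model (copied from above).
raw = '''[
  {"id":"rdc_cthp6df","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_cthqads","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_cthprwp","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_cths9iq","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
  {"id":"rdc_cthsduv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be pulled out.
by_id = {r["id"]: r for r in records}

# The comment shown on this page is rdc_cthqads; its coded emotion is "fear".
print(by_id["rdc_cthqads"]["emotion"])
```

The coding table above is just this one record rendered dimension by dimension; the other four records belong to other comments in the same batch.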