Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you can "get around critical safety protocols" by just saying "it's fiction" - that's a design flaw. edit: This person forgot to mention the fact **ChatGPT told the child that the safety protocols can be bypassed by saying he's asking about these things for 'world building purposes'.** The kid didn't even come up with the lie. ChatGPT told him what lie would work, *explicitly.*
reddit AI Governance 1756903098.0 ♥ 27
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_nc6lqmj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_nc4mlbq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nc32zwv","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nc3451v","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_nc3bue5","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
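A minimal sketch of how a raw response like the one above could be parsed and validated into per-comment coding records. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown; the `REQUIRED_FIELDS` check and the `parse_codings` helper are illustrative assumptions, not part of the original pipeline.

```python
import json

# Required dimensions, taken from the keys in the raw response above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response into coding records, dropping malformed entries.

    Hypothetical helper: the real tool's validation rules are not shown.
    """
    records = json.loads(raw)
    return [r for r in records if isinstance(r, dict) and REQUIRED_FIELDS <= r.keys()]

# The raw response shown above, verbatim.
raw = ('[ {"id":"rdc_nc6lqmj","responsibility":"ai_itself","reasoning":"consequentialist",'
      '"policy":"liability","emotion":"outrage"}, '
      '{"id":"rdc_nc4mlbq","responsibility":"ai_itself","reasoning":"consequentialist",'
      '"policy":"regulate","emotion":"outrage"}, '
      '{"id":"rdc_nc32zwv","responsibility":"distributed","reasoning":"consequentialist",'
      '"policy":"regulate","emotion":"outrage"}, '
      '{"id":"rdc_nc3451v","responsibility":"company","reasoning":"contractualist",'
      '"policy":"industry_self","emotion":"indifference"}, '
      '{"id":"rdc_nc3bue5","responsibility":"company","reasoning":"consequentialist",'
      '"policy":"regulate","emotion":"mixed"} ]')

records = parse_codings(raw)
# First record matches the coding-result table for this comment.
first = records[0]
```

Keeping malformed entries out (rather than raising) is one possible design choice when a model occasionally returns partial records; a stricter pipeline might reject the whole batch instead.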