Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Nope. Regular ChatGPT does this. There was an article earlier this year with researchers warning that it overlooks common warning signs and could even encourage suicidal behavior: https://www.sfgate.com/tech/article/stanford-researchers-chatgpt-bad-therapist-20383990.php
reddit · AI Harm Incident · 1756221953.0 · ♥ 48
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nas8uw5", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_nas2pmo", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_natz30g", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_natwvdy", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_narwpwb", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
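The raw response is a JSON array with one coding object per comment id; the entry whose values match the coding result above (developer / consequentialist / regulate / fear) is `rdc_nas2pmo`. A minimal sketch of indexing the batch by id for lookup — the field names come from the JSON above, but the surrounding pipeline is assumed:

```python
import json

# Raw batch response as returned by the model (verbatim from above).
raw = """[
  {"id":"rdc_nas8uw5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_nas2pmo","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_natz30g","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_natwvdy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"rdc_narwpwb","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

# Index the coding objects by comment id for quick lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# The comment displayed above resolves to rdc_nas2pmo in this batch.
print(codes["rdc_nas2pmo"]["policy"])  # → regulate
```

Keying by `id` makes it easy to join the batch back onto the original comments, since the model returns codes for several comments per request.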