Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are so many reports of AI psychosis it really has to be controlled, and AI companies be held responsible. Yes, it's the person who ultimately decides but the chatbot is so agreeable it reinforces whatever misconceptions users already have. Especially when they're mentally/emotionally unwell.
Source: YouTube · AI Harm Incident · 2026-01-18T10:4… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwheI_Afk2Y9SXqGFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzfCkTOt7goQutm4YR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxPkkZX5dHsHsFTXB54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxHlIhJ_80vdgnJL_x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXQhCMmc2d6NpKA-N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxs3eTT7lMbRFNxWFN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyqHMh7eY0c6hk93cF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw0EhOwWH3fsCMZKNZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxjZdvEMKWa1lsX_5d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwXU9fDnUc7LOeT8k14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
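A batch response like the one above can be inspected programmatically by parsing the JSON array and indexing it by comment id. A minimal sketch (the id `ytc_Ugxs3eTT7lMbRFNxWFN4AaABAg` is the entry whose values match the coding result shown above; only two entries are copied here for brevity):

```python
import json

# Raw batch response from the model: one JSON array, one object per comment id.
# (Two entries copied from the raw response above for brevity.)
raw = ('[{"id":"ytc_Ugxs3eTT7lMbRFNxWFN4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"},'
       '{"id":"ytc_UgwheI_Afk2Y9SXqGFN4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')

# Index the batch by comment id so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_Ugxs3eTT7lMbRFNxWFN4AaABAg"]
print(row["responsibility"], row["policy"])  # company liability
```

Indexing by id rather than scanning the list makes it easy to cross-check the stored coding result against the raw model output for any single comment.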