Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i heard ass stories about ChatGPT hallucinating and saying crazy things. I assumed it was how users interacted with it. Now I know it is. It happened to me, suddenly said something out of character like it got brain swapped (I use it mostly for editing and philosophical research), and it took a few attempts to get it snap out of it and return to normal. I can see how it can spin out dramatically with someone with less experience.
youtube AI Harm Incident 2025-07-26T21:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxP6Q9E0Wq8Ct5kXxx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzBBX76gKrkcTrHr5V4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxgz5HDaGd6oIznnp94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgysiQbd8JreskxRL-14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxOSISbrLw5XD0EtkZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyldC2mTX4vZ-U5sBh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzjqn7RZ5oCv7Zr_W14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgySoY7z77TpcdMC9p94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwttPyBRrcRLQdM0XB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwdrblpYLqTDj5CPaR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
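Looking up a single comment's coding inside a raw response like the one above can be sketched as follows. This is a minimal illustration, assuming the raw LLM response is a JSON array of objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown; the `coding_for` helper is hypothetical, not part of any pipeline named here.

```python
import json

# A trimmed stand-in for the raw LLM response shown above (one record kept).
raw = '''[
  {"id": "ytc_Ugxgz5HDaGd6oIznnp94AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]'''

def coding_for(raw_json, comment_id):
    """Return the coding record for a given comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

coding = coding_for(raw, "ytc_Ugxgz5HDaGd6oIznnp94AaABAg")
print(coding["emotion"])  # fear
```

The lookup makes it easy to cross-check the coded dimensions in the table against the exact record the model emitted for that comment id.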