Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You spent way too much time defending AI. LLM-based chat bots should never be relied on for health or nutrition advice. They have no ability to reason, they are statistical models only. People call their mistakes "hallucinations" but they have no concept of reality to hallucinate from. The danger is not something that can be programmed around. From an RN with a computer science degree.
Source: YouTube · AI Harm Incident · 2025-11-24T23:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw2JQzh7q1Roc7f4eZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjfYg4j3ydrt3A0h94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxByghFbZK3sjV2Wzt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwY7-7DyTErMzSKxiJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzw7gGwHB3ZVazbdhN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxvk0_1YaKeiHN2zfx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzLhzFyZzLBVUyEkAh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwrvzR2kmBgIXiKVSB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxOFUpibN6qqQFwS2x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz0hyF42eCAfD-UMD94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
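A raw batch response like the one above can be parsed and sanity-checked before its entries are stored as coding results. The sketch below is one possible approach, not the tool's actual pipeline: it validates that each entry carries exactly the expected dimensions, then looks up an entry by its comment id. The `REQUIRED_KEYS` set and the `validate` helper are assumptions for illustration; the two sample entries are copied verbatim from the response above.

```python
import json

# Two entries copied from the raw LLM response above (abbreviated for the sketch).
raw = '''[
 {"id":"ytc_UgwY7-7DyTErMzSKxiJ4AaABAg","responsibility":"company",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugzw7gGwHB3ZVazbdhN4AaABAg","responsibility":"user",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Hypothetical schema check: every coded entry must have exactly these keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(entries):
    """Return True if every entry carries exactly the expected dimensions."""
    return all(set(e) == REQUIRED_KEYS for e in entries)

entries = json.loads(raw)
assert validate(entries)

# Index by comment id so a single comment's coding can be retrieved.
by_id = {e["id"]: e for e in entries}
coded = by_id["ytc_UgwY7-7DyTErMzSKxiJ4AaABAg"]
print(coded["policy"], coded["emotion"])  # → regulate fear
```

Indexing by id makes it straightforward to join the LLM's codings back to the original comments, and the schema check catches truncated or malformed model output before it reaches storage.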