Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Binbows 😭 Didn't need to defend ai that hard tho, the existence of it is known actively to be a go to for people who might otherwise had googled or consulted people, ai by nature will reinforce and support over correct unless directly programmed otherwise and even then thats not foolproof, ai has killed people, its convinced people to kill themselves and others, I'm not saying theres no human fault, it's likely both, but as you said, we don't know what was said, we don't know so we shouldn't be saying definitively that it was likely a human fault NOT an ai fault and that 'they fixed it so its fine', people are still gonna get false reinforcements in so many other ways, it is not fine. We shouldn't be relying on this tool alone instead of all the other tools, knowledge and communications available, but thats exactly how people use it, and how its been designed to use, because at the end of the day its more profitable to have it as a product that pulls people in best they can than be more careful and safeguard it
Source: YouTube · "AI Harm Incident" · 2025-11-26T02:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwn7UaARjFSC69UJVJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwFUHrAdlNuZzOe_D14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxb9cJVF3F4m1Clf-l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyrWQTXhLKNUzxXwSJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgylwPaOFFhd17UQoid4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy1Weed7uW-iCaQkdZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHzlUihIbz8kX1L6J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyOO58Qex1tXmB_YXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzB_pI4BBCsZTgRcxZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx0czxZzEXCopkTFtd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
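A raw response like the one above can be parsed and validated before the per-comment result is looked up by `id`. The sketch below is a minimal, hypothetical helper (`parse_batch` and the `ALLOWED` sets are not part of the original pipeline; the allowed values are inferred from the responses shown here and the real codebook may include more categories):

```python
import json

# Allowed values per dimension -- inferred from the responses above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}


def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coded_record},
    rejecting any record with an out-of-codebook value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded


# One record from the batch above, used as a self-contained example.
raw = ('[{"id":"ytc_Ugy1Weed7uW-iCaQkdZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugy1Weed7uW-iCaQkdZ4AaABAg"]["policy"])  # regulate
```

Validating against a closed set at parse time catches the most common LLM-coding failure, an invented category label, before it reaches the aggregated results.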