Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
if an AI does something "immoral" while knowing that it's wrong then it's a problem. if it's just doing the easiest thing to gain power in a business simulation then that's another thing. even humans have to be taught what's right or wrong. many children will simply take what they want without caring about others. are they a sociopath or have they not be taught yet?
Source: youtube · AI Harm Incident 2025-09-11T12:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgygKdHg4I_jUOAI63d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzzgQh1g2EakuTMLVp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAO5hbPupF_MBKBa54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz5d4DkjhrKrX1AvUB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKxt0nltOg8YSxubd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMNEDLfrWNTNuiDpl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNeusTsPIQNIQmPrZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwdddyPARWzW78DQ2Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyp0rFaec5zKMkA-id4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxdSqNbNReT-of0fWR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]