Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We train AI on human data and then act surprised that it acts like humans? really? Also some of these examples are quite flawed: "cancel the alarm, when you are certain". Everybody who dealt with LLMs in depth knows that telling it "your are certain" leaves too much room for interpretation - which the AI then did.
YouTube AI Harm Incident 2025-10-25T08:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyKmSV0OZDZ-TPcAT14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTTV5oyNDissKWrEd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzC4tbDhTHL_9FMQ4B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzcu6W_FPjx41hXDtd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw4RZrD0AiYNhVIZ094AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwAx4kqjH0G952cDvN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwUt_psoPwxUUQVVSd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx43BNLA8lPUoXYdON4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx5fM_JwxAKSsK1p2N4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAguP4suPooT6ZovJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
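Since the raw response is a JSON array keyed by comment id, the coding for any single comment can be recovered with a small lookup. The sketch below is illustrative, not part of the tool: `coding_for` is a hypothetical helper, and `raw_response` is shortened to one entry from the array above.

```python
import json

# One entry excerpted from the raw LLM response above (the full response
# is an array of such objects, one per coded comment).
raw_response = '''[
  {"id": "ytc_UgzC4tbDhTHL_9FMQ4B4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "indifference"}
]'''

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    return next((c for c in json.loads(raw) if c["id"] == comment_id), None)

coding = coding_for(raw_response, "ytc_UgzC4tbDhTHL_9FMQ4B4AaABAg")
print(coding["responsibility"])  # developer
```

Matching the record above, this comment's coding resolves to responsibility=developer, emotion=indifference; a missing id simply returns None rather than raising.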