Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It funny that everyone things that AI it self is dangerous, but it is not, we humans are. If AI will ever kill somebody it will be human mistake. AI doesnt program itself, we does and if we not carefull enough and drop safety measures, beacouse to expensive or slows AI, then yes it might hurt us, but it will be our mistake. Just think about it, it wouldnt be the first time, There is nuclear energy, we can use it too generate electricity, but no, the first thing we done with it was to creater a weapon called atomic bomb to kill thousand and later millions of people at once. And Atomic bomb didnt create itself, we did. AI didnt create itslef, we did, if it does something wrong it is our responsibility.
youtube AI Harm Incident 2025-09-27T18:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwx3cvT-A90XGZeZ5V4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzsprldwftZC72r89d4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzXjspqqtzNmtgU4bF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxtPGT9hNWyt4eAmD54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyutFvAqf50EfF72Vh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzxHkeF9AaI5ORTZyt4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQY9NWavWfZMNx5Al4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzoYhbQOYmCc-j6a0d4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw9AJ-L-w9TfVNx54d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwEDNtSyH5iSzDdDs14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
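The raw response above can also be inspected programmatically. A minimal Python sketch, assuming the response parses as a JSON array with the five fields shown; the `raw` string reproduces two entries from the response for illustration, and the variable names are hypothetical:

```python
import json

# Two entries copied from the raw LLM response above (illustrative subset).
raw = """[
  {"id": "ytc_Ugwx3cvT-A90XGZeZ5V4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQY9NWavWfZMNx5Al4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment id so any coded comment can be looked up.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding dimensions for a specific comment.
print(codings["ytc_Ugwx3cvT-A90XGZeZ5V4AaABAg"]["responsibility"])  # user
print(codings["ytc_UgyQY9NWavWfZMNx5Al4AaABAg"]["policy"])          # regulate
```

Indexing by `id` makes it easy to cross-check the per-comment table shown above against the exact model output.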