Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What if AI isn't dangerous at all, but this is all being set up to give HUMAN actors excuses for what they want to do anyway, like killing most of the population?
Source: youtube, AI Harm Incident, 2025-07-27T08:1…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwwVLGSgP4KJ2Uv0gd4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxYoo0mxguiiOQD1I14AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgxWlPG4cebOWlxUcL14AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgwjrdE-TtrWxHJki1h4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugwlu3jwwnHDKvlzb6t4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugy6CUJr-9wCVz5gMeN4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgwxJOHUDMJeS62CjN54AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwlmFiX_poRT7HcK-Z4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgyD4yUQWOUhJVZ4tjt4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgweZXTNJl8Zp5_FhXp4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"}
]
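When inspecting raw responses like the array above, it can help to validate each record against the codebook before trusting the coded values. The sketch below is a minimal, hypothetical validator: the `ALLOWED` sets are inferred only from the values visible in this response and may not match the pipeline's actual codebook, and `validate_coding` is an illustrative helper, not part of the real system.

```python
import json

# Allowed codes per dimension, inferred from the raw response shown above.
# This is an assumption -- the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" and every coded
    dimension holds a value from the (assumed) codebook above.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed entry: drop it
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one well-formed record (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"deontological","policy":"liability","emotion":"fear"}]')
print(len(validate_coding(raw)))  # prints 1
```

Records with unknown codes or missing dimensions are silently dropped here; a production version would more likely log them for manual review, since a rejected record usually signals model drift rather than noise.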