Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly, trying to predict a human behavior without a human brain sounds more like the definition of a psychopath. So I guess we're creating AI system capable of describing our behaviors without necessarily understand them. So that's what makes them more dangerous than we may think
youtube · AI Moral Status · 2025-11-09T10:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzCfgXOWqj_QckvzY14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1PcgxyRpO6yFePBd4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgysOsgfV69frC13hlN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzwr_KSzvipseA0Au94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyl9pZdZa4uSa23sUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwB_5LjgvmB9LLCc3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy03wl9LdwnUgQDn0l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzURJ6yX_tzv56jRcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyySM7JDt6YFvJjZd54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8jelfArGzHzPt87F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
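A minimal sketch of how a raw batch response like the one above could be parsed and checked before the codes are stored. The `SCHEMA` sets here are inferred only from the values visible in this response, not from the project's actual codebook, and `validate`/`by_id` are illustrative names, not part of any real pipeline.

```python
import json

# Raw batch response text, as the model returns it (two records copied
# verbatim from the response above).
raw = '''[
  {"id":"ytc_UgwB_5LjgvmB9LLCc3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy03wl9LdwnUgQDn0l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]'''

# Allowed codes per dimension -- inferred from the values seen in this
# response; the real codebook may define more categories.
SCHEMA = {
    "responsibility": {"none", "government", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate(records):
    """Reject any record whose value falls outside the assumed schema."""
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
    return records

coded = validate(json.loads(raw))
by_id = {rec["id"]: rec for rec in coded}  # index coded results by comment id
```

Looking up `by_id["ytc_UgwB_5LjgvmB9LLCc3d4AaABAg"]` reproduces the Coding Result shown above (developer / deontological / liability / fear); validating before storing catches any off-schema value the model invents.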