Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's so annoying when I read titles like "Inventor of AI warns of AI" because I never wanted any part of AI. AI is a total intrution on my life that I didn't sign up for at any time. As if I could do anything about it by being "warned by AI inventor?" I don't need to be warned. If anything these videos are only showing me specific people I should hate. Which doesn't help anything. It's like the mainstream news. What does it help to watch nonsense that reminds me of what I hate? The last people I would listen to "warning me" about anything are the very pricks who invented the stupid thing I never wanted anything to do with. Not that I'm going too spend even one second watching this video because I already know it's a waste of my time to watch it by the title alone. Clearly this is clip-baity nonsense. Now, if it was a video saying, "AI inventors and founders go to prison for life for creating AI"....I'd watch that video.
youtube 2026-02-06T02:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzHIX539Cu-TZ8cKFp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxdvGu88epw0ZqEZNR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzbSW-cSuHSCIA7Yrt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzewYhDFoE59O2HB7R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwVJdBB7DNwxpCPMHN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxzJ24O0ToKh1vOu7N4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw7YABpw2L4CeTnWqZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwEK4g-NYgqa7xku_h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwG-yf3ki3l-x6_aTl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyz4nvME-KOBQ7kwCd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
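The coding result shown above is recovered from the raw batch response by matching on the comment id. A minimal sketch of that lookup, assuming the response is a JSON array with one record per comment (the `extract_coding` helper and variable names here are hypothetical, not part of the tool itself):

```python
import json

# Excerpt of a raw LLM batch response: one coding record per comment id.
raw_response = '''[
  {"id": "ytc_UgzHIX539Cu-TZ8cKFp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxdvGu88epw0ZqEZNR4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# The four coded dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse the batch response and return one comment's coded dimensions."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]  # raises KeyError if the id was not coded
    return {dim: record[dim] for dim in DIMENSIONS}

coding = extract_coding(raw_response, "ytc_UgxdvGu88epw0ZqEZNR4AaABAg")
print(coding)
# {'responsibility': 'none', 'reasoning': 'deontological', 'policy': 'none', 'emotion': 'outrage'}
```

Keying the records by id rather than by position makes the lookup robust if the model returns the batch in a different order than it was sent.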