Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like no one understands AI really. As far as I'm concerned there wasn't a single case when an AI would speak unprompted. It only gives answers, it learns from human literature, so of course it reads books on AI doom scenarios, when asked about something specific, maybe just more sources lead to "doom scenarios", so it tells you that. There is currently no evidence that AI has anything like consciousness, and regarding the "survival desire", of course if you tell an AI "do anything you can to stop deletion" it will probably consider violent options, cause you asked them to. Overall the whole "AI is gonna take our jobs and then take over the world" thing seems to just be nonsense driven by fear of the unknown, then videos like this appear, showing only one side, extremely polarized, not mentioning the fact that like 2/3 of the video isn't even about AI but wealth inequality, I mean, of course, it's true but what has that to do with AI? Just please do your research guys, actually learn how the models work and then evaluate your own risk (like where the fuck do they get percentages for human extinction from? They just make them up for the media), hope I help :))
Source: YouTube · Incident: AI Harm Incident · Posted: 2025-09-13T20:3… · ♥ 21
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzvo6jP3n3WbuC5euh4AaABAg", "responsibility": "company",     "reasoning": "virtue",          "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyHoT-flhgc9eMi8Qh4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugx2CHuExg5xrzopxVR4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "unclear"},
  {"id": "ytc_UgyNbn0NpiIXSFMMgdR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugz8tJIfIzd3VXvYzJd4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgwXzEZb6PWR08q0x0x4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwGw03KcZESjUA576V4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugxq78HThmS-CwTSqNl4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxwRYHQiTFM1XqM6bt4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgyjFP1xqO0zouGxyzJ4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"}
]
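The raw response is a JSON array in which each object pairs a comment id with the four coded dimensions, so inspecting the coding for one comment is a parse-and-lookup. A minimal sketch, assuming the pipeline keeps the raw model output as a JSON string; the id and field values below come from the records shown above, while the function name `lookup_coding` is hypothetical:

```python
import json

# Abbreviated stand-in for the raw LLM batch response shown above:
# a JSON array of per-comment coding records.
raw_response = """[
  {"id": "ytc_UgwXzEZb6PWR08q0x0x4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]"""

def lookup_coding(raw, comment_id):
    """Parse a raw model response and return the record for one comment id."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; treat as uncodable
    # First record whose "id" matches, or None if the comment was not coded.
    return next((r for r in records if r.get("id") == comment_id), None)

coded = lookup_coding(raw_response, "ytc_UgwXzEZb6PWR08q0x0x4AaABAg")
print(coded["emotion"])  # resignation
```

Guarding the `json.loads` call matters here: batch-coding prompts occasionally yield truncated or non-JSON output, and a `None` return lets the caller flag the comment for recoding instead of crashing.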