Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If enough people get desperate enough, they'll destroy AI. Imagine a genius coder hacking some AI controlled aeroplanes and crashing them into the data centers like 9/11, or 10,000+ unemployed people storming the data centers like they did with the white house... The government will be forced to give UBI (or use fighter jets, which will end up destroying the country) But I don't think that'll happen though, because for AI to keep improving at this pace, they'll need far more data centers and even these huge corporations don't have the budget for it, nor does planet Earth have the resources for it (mainly electricity, second would be water). The more likely scenario (which I see happening already, as more and more people turn against AI) is the executives / managers who actually care about their employees will resign, and go start their own company. They'll take a few of their juniors with them. This time, they'll stay away from investors and pressure and scale slowly like the good old days before venture capitalists. They won't be able to compete on the scale of the giants, but locally they can succeed. Enough to provide a livelihood for everyone in the company. And if enough people do this, the ruthless corporations will be replaced by thousands of local businesses.
Source: youtube | Viral AI Reaction | 2025-12-23T15:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzcegdDYsc_TO0p0AV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgztMshQNXzQHltSeqN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgysGKv9aI99pU2jBZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyOn_AE4y-2otHT0YB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz3duN5Sw44YiWaeF14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"mixed"},
 {"id":"ytc_UgycdmirSgkVU90XSVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxPy9lyNuppRpe-s5h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxhfsO_eGlxEvh-_YJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxmmcdpK3kG-zzAsul4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugzfx6Sajlif7cFVnTh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}]

(Note: the model's original output terminated the array with ")" instead of "]", which would make the response unparseable as JSON; that likely explains why every dimension in the coding result above is "unclear".)
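When inspecting raw responses like the one above, a small tolerant parser helps separate genuine coding failures from trivial formatting slips. The sketch below is not the pipeline's actual parser; it is a minimal illustration, assuming the response should be a JSON array of per-comment records, that repairs the one malformation visible here (a trailing ")" in place of "]") before giving up. The sample `raw` string and the id `ytc_x` are hypothetical.

```python
import json


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of per-comment records.

    LLM output occasionally closes the array with ')' instead of ']',
    which makes json.loads fail; repair that one case before re-raising.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        repaired = raw.rstrip()
        if repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        return json.loads(repaired)


# Hypothetical sample mirroring the response format shown above,
# including the stray ')' terminator.
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"})')

# Index records by comment id for lookup.
codes = {rec["id"]: rec for rec in parse_coding_response(raw)}
print(codes["ytc_x"]["emotion"])  # fear
```

With a repair step like this in front of the coder, a single-character slip in the model's output degrades to a recoverable warning instead of marking every dimension of every comment in the batch as "unclear".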