Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a sneaking suspicion AI will completely fail as an endeavor for a number of reasons; 1) We simply don't have the infrastructure for it. AI takes a LOT of power, in the datacenters a lot of water for cooling, and it takes a lot of infrastructure to coordinate the massive amounts of communication over the internet it needs for data transfer. Our electrical grid can't support it, and we're not building power plants fast enough for it to matter. We've reached the end of Moore's law, computing power isn't really going to get any more improved than it already is. Right now if you exponentially increase the computing resources available to a system, it only results in marginal linear levels of improvement, meaning we're rapidly reaching the point of supercriticality where intelligence WON'T improve beyond a certain point, unless someone can perfect quantum computing. 3) The AI revolution is predicated on the idea it will make it easier for businesses to do business by cutting costs, by not having to hire and employ actual people. Here's the problem with that equation: if no one has jobs, or money, whose going to buy all the products? Effectively AI is self-limiting, it doesn't fuel its own growth because people will stop consuming when they can't afford anything, and these business will full-fire fail completely and go out of business with 0 revenue. This will also have a cascading effect on, and cause a collapse in government because no income also means no taxes, which means no functioning government anymore. Two things that could seriously engineer a collapse? Deny people the basic necessities they need to survive in conjunction with a severely weakening government, and physical violence is almost inevitable. The people who might have lost their jobs to AI might just violently burn the stupid datacenters to the ground as a giant F.U. to people like Sam Altmann.
Source: YouTube · Viral AI Reaction · 2026-04-07T07:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwvy1g32TQLYzLc-y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxlNYJAUgAYkfamSN94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyBI8JqPrel0SBEMyN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz6a6RcG5GlvJQAdR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyK9o56u-kLGsB2pdl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxBa_qB1lQLkOOGOLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCmYrGoatPz3W_LqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwttxbNYjbrduytcU94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKcafGWebPrSuOJRV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxbwZl2Uv2ALqc5u054AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
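A minimal sketch of how such a raw batch response could be parsed and checked against the four coding dimensions shown in the result table above. The `validate` helper and the `DIMENSIONS` tuple are assumptions for illustration, not part of the coding pipeline; the raw string here is abbreviated to two records from the response.

```python
import json

# Abbreviated raw LLM response (two of the ten records shown on this page).
raw = (
    '[{"id":"ytc_UgxlNYJAUgAYkfamSN94AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"ytc_Ugwvy1g32TQLYzLc-y94AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
)

# The four coding dimensions from the result table (hypothetical constant).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def validate(record):
    """Return the list of dimensions missing or empty in a coded record."""
    return [d for d in DIMENSIONS if not record.get(d)]

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index codings by comment id

# Look up one coding and confirm every dimension was filled in.
coding = by_id["ytc_UgxlNYJAUgAYkfamSN94AaABAg"]
print(validate(coding))  # an empty list means the record is complete
print(coding["emotion"])
```

Indexing by `id` mirrors how a coded-comment page like this one would retrieve its row from the batch response; a non-empty list from `validate` would flag a record the model failed to code fully.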