Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a good take, if a bit optimistic about the growth of AI. One of the main assumptions in this scenario is that we will get to this 'general' AI with LLMs, and that's just not really going to happen. If we somehow find a fundamental architecture that can get us there, this is genuinely a worry. Most of the newer LLMs are actually incrementally better than previous generations, and most of their performance uplifts actually come from test-time compute (ie, giving them more time to think), and thus, we are resource-constrained on what we can do with AI. Even if we make the assumption that modern AI can become general and self-improving, running these models takes real, tangible resources, and we can already see the effects of this. The data centers consume fresh water and electricity at a higher rate than we can start growing production, and higher consumption is making these models more expensive to run. Eventually, you'll hit a threshold that would make these models perhaps close to the cost of having human workers to run. Even without reaching general AI, jobs will be lost, but unless we actually reach general AI in the same sort of way this video mentions, the current AI paradigm will simply be another technical breakthrough that makes it easier to focus on higher-level work (similar to previous technological revolutions). Love the videos!
Source: youtube · "Viral AI Reaction" · 2025-11-23T03:2… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy9y4BiUG_5C0zq1o14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzlfRdPu5EPk7zvRsN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyDgAY1BXVHyKcs08p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfyToPlb2mVq7AjRV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxschw2PRQENfX8jOJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwNnetgKpFSgaC8frd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyX04XjIuUNIDZvE7h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzqRAvM0Yew13rxjtt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwwK1W4V51XS_XII0V4AaABAg", "responsibility": "elite", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzVSwECQxfzCP8D18F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
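A raw batch like the one above can be parsed and sanity-checked before use. Below is a minimal Python sketch; the allowed-value sets are an assumption inferred only from the codes visible on this page (the full codebook may define more values), and `parse_coded_batch` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension -- ASSUMPTION: inferred from the
# codes that appear on this page, not from the actual codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "elite"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
    return records

# Usage with one record from the batch above:
raw = ('[{"id":"ytc_UgyX04XjIuUNIDZvE7h4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
batch = parse_coded_batch(raw)
```

Validating up front means a model response that drifts outside the codebook fails loudly at ingestion rather than silently skewing the coded dataset.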