Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we give too much credit to the one-word-at-a-time predictive algorithms we call LLMs. They are not truly thinking, just predicting the next word. We make sense of whatever they produce and "believe" it is the product of thought, when in fact it is not. If we tone down the sci-fi, we see a tool that is far from perfect (it currently hallucinates ~10% of the time), and we also see a massive gap between this tool existing and it actually being deployed and used. In fact, most companies still run their businesses on Excel, and I'm not just talking about startups or tiny companies; many mid-sized and even large corporations operate that way! Do you really believe there is going to be a 3-second hostile takeover by AI? Come on...

If you use AI to code (as much as I do), you will realize that the code it produces is mediocre at best, and most of the time it does not work. I use it to TEST working code instead, because if I used it to generate code, I would never finish a project! And honestly, it will still take years for this to improve significantly, however fast the AI industry is moving. Let me throw in one more element: AI requires massive power consumption... We simply do not have the natural resources to sustain it, at least not for everyone... So either we will have to depopulate, or a dramatically new technology must emerge that drives energy costs down.

Just think for a second: an AI may make data-driven decisions, but (at least for now) it cannot repair your home AC, it cannot do proper welding in the field (oil & gas, for example), it cannot do the many jobs where precise human labor is required... I mean, if it were capable, why isn't it cleaning our sewers already? Why isn't it cleaning up Chernobyl? ;) Let's scale down the spiffy...
Source: YouTube, "Viral AI Reaction", 2025-11-28T20:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz9LM1joVps_sv_POJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwB_cBnVdGMpi4CYKB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzNetCFx7RbfglciRt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwUeLy3LDiitosR7Qx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyKD_TxZ_OwDv3WZLN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz7U2LlbJX6jClodD54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyfYVErS99XmhauiXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlAUvcb5XuWvtdO3V4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyeGy3teDmfRlw3yw54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyHwap8eVdiDcJFrk14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
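A raw response in this shape can be parsed and sanity-checked before the per-comment codes are displayed. The sketch below is a hypothetical downstream step, not part of the coding tool itself: the allowed vocabulary for each dimension is inferred from the values visible in this response, not from a published codebook, so treat those sets as assumptions.

```python
import json
from collections import Counter

# Assumed per-dimension vocabularies, inferred from the values seen in the
# raw response above (not an official codebook).
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "resignation", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse the raw LLM response and reject records with unknown values."""
    records = json.loads(raw)
    for rec in records:
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with one record in the same shape as the response above
# (the id here is a placeholder, not a real comment id).
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
emotions = Counter(r["emotion"] for r in codes)  # e.g. tally emotions per batch
```

A tally like `emotions` is one way to roll batches of coded comments up into the dimension summaries shown above each comment.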