Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For me, that's the biggest thing people don't seem to realize. For most people, including executives and investors, AI is just this thing that works. They are assuming we will keep seeing AI growth. But I seriously doubt the underlying architecture will ever be the foundation for these super intelligent systems. By design, these architectures will never, NEVER, be reliable. You absolutely cannot have a system that predicts based on matrix calculations be reliable AND keep evolving at the same time. It's so damn easy to poison these systems with malicious and/or slop inputs. The only way I see AI being actually useful is by having super finely tuned models for specific use cases, like medicine, where it can be used as an analysis tool. This idea however that they will control factories, and build unknown systems and gather resources for whatever they need, etc, is, in my opinion, completely ridiculous for the type of AI we have now and are currently developing based on these architectures. All of this with the assumption that AI investment doesn't come to a halt. Lol, right. Let's see how many investors keep pumping money into this with no returns for 10 years. How come there isn't the same amount of hype and investment into nuclear fusion? The benefits would arguably be even greater for profits and humanity. Oh right, it would require absurd amounts of funding for years without return on investment. That's why it slowed down. Same will happen with AI, even worse in my opinion, because the life span of these GPUs isn't unlimited. What happens when funding slows down and you need to replace your entire data center? Anyways... let's see what happens in the next 5 years.
youtube · AI Jobs · 2025-11-19T05:0… · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_Ugxm8BlzpYabNEd6hMJ4AaABAg.AQ83rwpFUXLATTRVdd0q6P","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgxKe68wLMbG07uGNQR4AaABAg.APl1w50DyOmAPs_3JeAeae","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"approval"}, {"id":"ytr_Ugwl9OCVDMNj8l3T5GZ4AaABAg.APh6ugrdGABAPj5LgbnxOV","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugy8h2wUusPH-nLVq3R4AaABAg.APge608gaByAPnMnHscTv7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_Ugy8h2wUusPH-nLVq3R4AaABAg.APge608gaByAPnOLAsjtyc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytr_Ugydd4pFLJPhr07YAoN4AaABAg.APgYkFuAKU6APgxMl1hYGk","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytr_UgzeA02YJceAsxpInv14AaABAg.APgTHStFG4rAPi9J9nNj_B","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytr_UgzeA02YJceAsxpInv14AaABAg.APgTHStFG4rAPiUTib-L0E","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytr_Ugz7If8ljFHvNXSpKRd4AaABAg.APgQGrjnuiLAPhqeN6mWS2","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"ytr_Ugz8w69CshyKPmc_-ZF4AaABAg.APgPtMGRMIEAPiLbIHJvx2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]