Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah right. AI will consume more energy than the whole of humanity by 2028 if it keeps growing computational needs like it does now just to make it a tiny fraction more accurate, which means it still hallucinates if the knowledge queried for is not in its LLM already. That's what you claim will go places? AI still does not understand unless humans have told it what to understand, which makes AI about as reliable a source of intelligence as humans are. All it does: it's faster, but less reliable, because it has, contrary to humans, no concept of right and wrong or reality. This video goes against everything that is now emerging about the real problems of AI, and this guy will be the only one wrong. AI can't grow anymore without destroying the whole industry, because the costs outweigh the energy and computation power it needs. There is NO profit in this business model, because LLMs will NEVER be intelligent; they are stochastic parrots. The best indicator is the turning away of so many programmers from vibe coding, because the process is so flawed and so maintenance intensive that it by far outweighs the benefits. Or let's just take a good look at the MIT study that showed that only 5% of companies actually see a productive benefit from applying AI, and these companies are by and large start-ups that are built on AI. And they will also not go any further, because when the company grows, they will find themselves in the same spot as the 95% of other companies not seeing any productive gains from AI. Don't trust hype prophets from the very companies that need AI to succeed or lose a ton of money. They will lose it anyway, but you might not.
youtube AI Responsibility 2025-10-30T13:4…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzH89X6bUBv4wCZTgF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyqGjPNwJ6QA0Fz-4F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyxQO09TTCNTIZLiEV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy4NUAksRfnApvRwk14AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgypZknkEThfR3Qywtx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwOHQ0pyTPcxci4HiF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyB20yVDFKkceDmmgp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzDRtwqT8lqpKvDHax4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKB3YK0w4etax9s254AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzprmYLK9plq4KKxah4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
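The raw response is a JSON array of per-comment codings keyed by comment id, so the coding shown in the result table can be recovered by parsing the array and looking up the matching id. A minimal Python sketch of that lookup (the single-record payload below is abridged from the full array above, and the variable names are illustrative, not part of the tool):

```python
import json

# Abridged raw model output: one record from the JSON array shown above.
raw = '''[
  {"id": "ytc_UgzDRtwqT8lqpKvDHax4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Index the codings by comment id so a single comment's coding
# can be fetched without scanning the whole array.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgzDRtwqT8lqpKvDHax4AaABAg"]
assert coding["responsibility"] == "company"
assert coding["emotion"] == "outrage"
```

In practice the parse can fail if the model emits text around the JSON, so a real pipeline would wrap `json.loads` in error handling before indexing.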