Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let’s think more logically, because Dr. Roman Yampolskiy seems to have a strange wish to spread panic. The so-called LLMs we all fear can only learn from human content. When content is generated by other AIs without human supervision, they tend to talk to each other and hallucinate a lot (meaning they create false information). An LLM is just a prediction algorithm, that’s all. It can’t be an AGI overlord. It doesn’t have the capability. In one of his recent interviews about GPT-5 and its disappointing results, Sam Altman tried to avoid the question of whether LLMs (AI transformers) can invent things, because they can’t. An LLM will never be able to do something like that. Yes, LLMs can replace a lot of non-creative jobs, the same way software has. But the strongest AI models today are not trustworthy in the long run. A lot of companies are already backing away from their AI investments because they don’t bring enough value. In five years, AI may be better, but not by much, maybe 20–30% from where it is now, factoring in not only response quality but also energy consumption. Running even relatively weak models costs a lot in electricity and requires expensive hardware. That means AI is restricted by our technological and energy production limits. The real question is how much humans can evolve by using AI, and how we can avoid getting dumber by letting AI tools do all the work for us. Can we use AI to progress fast enough to make the world better, or not? (Also, I did fact-check with AI before posting.)
youtube · AI Governance · 2025-09-07T17:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
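Each coded comment carries the same four label dimensions plus an id and a timestamp. The Python sketch below checks one record against that schema; note that the allowed value sets are inferred only from the labels visible on this page, so the real codebook may define more, and the validate helper is hypothetical, not part of the tool.

# Minimal schema check for a single coding record (hypothetical helper).
# Allowed label sets are inferred from the values visible on this page
# only; the actual codebook may permit additional labels.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"fear", "approval", "indifference", "outrage", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of schema problems found in one coding record."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

For the record shown in the table above, validate returns an empty list; a record with a misspelled or missing label comes back with one problem per bad dimension.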
Raw LLM Response
[ {"id":"ytc_UgyLpVsMQ5iPm7dzuVR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzCnvjN3M74RJ8Gjox4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyHhUJP3JO3lhPghyp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzJnWW4RolXnyxX4oF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwMwMtjGuQth8PXugB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw6zpbZpFZyrftmVdh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxrPFl4PKuLWyyLo7p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyxJamBIsXi0urEkyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxw8IeGl1Ilwa_CLR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxatcFZLA3Zf79wPB94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]