Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These people are just riding a hype train. On one hand, you have those who go around, claiming the AI will be the end of us, and then you have those who call it a "glorified auto-complete". You can let these guys manipulate your mindset and indoctrinate your brain, or you can think for yourself. First of all, AI models (LLMs to be exact) are not intelligent; there's no trace of intelligence inside. That's not because they're not advanced enough yet, but because of the technology they're based on. Such a system could never give birth to intelligence. They're still a series of 1s and 0s after all. These people are just making good business decisions by manipulating stock market trends for their own financial benefit. They're capitalising on a far-fetched dream of AGIs. For AGIs to arise, a completely new approach would be required. You can push trillions of parameters into an LLM, and you would just make it better at what it already does: predicting the next word. I can't see how that ever evolves into an ultimate digital God. That doesn't mean AI isn't dangerous technology, but the real threat lies exclusively in its use. It's true that AI is more and more capable in a wide variety of tasks, and that it has already been used for all sorts of malicious activity. AI could harm us only if we allow it to; if we give it access to dangerous systems where wrong decisions could lead to casualties. AI is still highly prone to hallucination, and such a mistake could lead to catastrophic results. Unfortunately, all these AI companies and most powerful countries are rushing AI so hastily and irresponsibly, afraid not to be left behind in this race, that some sort of devastating mistake is eventually inevitable. So my point is: Don't be afraid of AI; be afraid of those who use it irresponsibly, for malicious purposes. After all, history repeats itself. This scenario already happened, some 60 years ago. If you seek answers, look there.
youtube AI Moral Status 2025-12-13T20:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwJS2CR_aIvc7ja-lF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyC9ej9iWRnuQUdyvZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwp0MwzPdjwhrDkj2l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz-Mfamrd4PXn8VJN94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugze9vpt7zHXnsF0rXd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxytHHQGE6IWauY0ep4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwEv6MB7z8k-AewoNR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwukw3gNYdLivdk-ol4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVJpzlEnTnu3or65p4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzYLzzDAMWF14MDNF94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
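The raw response above is a batched JSON array: one object per comment, keyed by comment `id`, with one code per dimension. A minimal sketch of how such a response could be parsed and validated before the per-comment codes are stored (the field names and the values come from the response shown here; the full allowed vocabularies are an assumption, seeded only with values that actually appear on this page):

```python
import json

# Allowed values per dimension. ASSUMPTION: seeded from values observed in
# this page's raw response; the real coding scheme may define more.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "resignation", "mixed", "outrage", "indifference", "fear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a batched LLM coding response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}  # KeyError if a dimension is missing
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = codes
    return coded

# Hypothetical single-record response for illustration.
raw = '[{"id": "ytc_x", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}]'
print(parse_coding_response(raw)["ytc_x"]["emotion"])  # -> indifference
```

Validating against a closed vocabulary catches the common failure mode of batched coding prompts, where the model drifts to labels outside the scheme; rejecting the whole record makes that drift visible instead of silently stored.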