Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As an AI power user, I personally think that the hype is getting a little out of hand. AI is certainly a useful tool, a “force multiplier”, to use a military term. But the “intelligence” part is a bit misleading. I think people don’t realize that even the most advanced models today are basically souped-up chat-prediction algorithms, a more advanced version of what we use in our text messages. Even the “reasoning” models don’t actually think.

I myself have been using the different AI tools available for a variety of purposes almost daily since ChatGPT 3.5 made waves, and while there have been improvements in the features and the overall knowledge base since then, current models seem to still be just as prone to making mistakes as the first ones were. I was using o3 to help me study for a test just last night, and it made an error in a very simple arithmetic calculation that I easily caught. Mind you, this is the most advanced reasoning model available in the paid Plus tier of ChatGPT subscriptions. And these mistakes are pretty common. Before that, the 4.1 model had just made a series of mistakes when I asked it to refresh my memory about certain concepts in chemistry. Everything it tells you needs to be double-checked to make sure it is true, because half of the time it will literally make up fake citations if you ask it to research something complicated, and pass them off to you as if they were 100% true. The image generation, recognition, and voice models are similarly prone to making mistakes, or to crashing in the middle of their tasks.

People talk about the astronomical rate of progress with this tech, but hallucinations have been a problem in pretty much every AI system since the start, and there hasn’t been much progress in fixing them so far. Don’t get me wrong: AI is an incredible technology that can drastically speed up many tasks. But the people hyping it up as if we’ve arrived at the singularity don’t seem to understand the limitations that these algorithms really have.
youtube AI Harm Incident 2025-08-05T15:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyMv6Aer6VpePJlFE54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwsF66DVGyP9x0LKy94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyyN8EVO1Ig5u126F14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxR-Q6ad0UMpKWB3PR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDb7CK4uo1w4kjjQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2n7OOohVtz-2ikbp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKqZ52u2t_uS5ETsx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyelgeNp4u_enTZ0YB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBuDrBpzaAqpgtf6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzp3XBupUc6mWXMlbp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
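A raw response in this shape can be parsed and checked against the codebook with a short script. This is a minimal sketch: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the coding result above, but the allowed-value sets in `ALLOWED` are assumptions for illustration and would need to match the actual codebook.

```python
import json

# Two records in the same shape as the raw LLM response above (shortened for brevity).
raw = """[
  {"id":"ytc_UgyMv6Aer6VpePJlFE54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz2n7OOohVtz-2ikbp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

# Assumed allowed values per dimension; replace with the real codebook's categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "government"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def validate(records):
    """Return (id, dimension, value) triples for any value outside the codebook."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # an empty list means every record passed validation
```

Running a check like this before writing coded values into the results table catches malformed or off-codebook LLM output early, instead of letting it silently enter the dataset.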