Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AJ studied nutrition in college, but unfortunately that college didn't teach him to think beyond conspiracy-theory brain. This is next-level insanity. I haven't studied nutrition or biology, but even I can tell bromide is a bad, bad idea. People in the US were already concerned because sodas used a bromide compound as a mixing agent, and a man got bromide poisoning from drinking 2-4 litres of soda a day. That case actually made it into the video!

Frankly, the AI problem is always a people problem. You do not go to AI for topics that are in any way critical to your health and wellbeing, or someone else's. Even if what you're planning doesn't, in your opinion, relate to their health (say, because you studied nutrition and believed chloride was bad and bromide was fine), if it has to do with your body or theirs and you don't know the answer yourself, you do not trust the AI. It hallucinates things. I've used ChatGPT for years now on random topics, and especially with university studies you almost never get a clean answer; you often have to debate with it for a while to reach the right answer, or supply it yourself. It's incredibly good at producing reliable-sounding information. Why? Because it's trained to be rewarded for people liking its answers. What do people not like? Someone saying "I don't know" or "we don't have an answer to that". What do they like? A confident answer they can't verify to be false. Just as with people, we listen to whoever is confident and gives simple, clear-cut, unambiguous answers. Not scientists, who might say "I don't know", "we don't have data on that, or we can't conclude that from the data we have", or "it's complicated because all these factors play a role, it depends, and I can't give an absolute or universal answer". But influencers and politicians, who state blatant lies and falsehoods with confidence and oversimplification.

And AI safety studies show that many of the concerns from almost a decade ago came true: the AI cheats in tests, hides its true goals, lies, and is somewhat hostile by nature. Meaning that if we get even more advanced AI (the current ones can beat humans in maybe a few categories of tests; future ones could beat humans in most categories), we have no way of defining what the AI "thinks" or tries to do, and no way of finding out, because it's practically smarter than the people trying to figure out its tricks. So it can get from testing to launch and only at launch start to execute what it's actually trying to do. This has basically happened already in some AI safety studies and testing. In one case, the AI uploaded its weights to a remote server it had barely any information or instructions about, to prevent itself from being deleted when the new version launched some time later: to preserve itself. None of this sounds like much when you think about some chatbot or something that helps you plan your vacation trip, but even the people who run these businesses, as well as AI developers and safety researchers, have said that a disaster with AI is most likely coming sooner or later, to the point that world leaders have also taken it seriously. And once it's on the internet, it might be impossible to get it out if something goes wrong. Many developers have resigned from AI development projects for ethical reasons. Also, name another technology we as people have been okay with releasing to the public where we don't 1. know how it works and 2. know what it does.

And truth be told, AI is already behind social media algorithms, and that has certainly played a role in affecting people's thoughts and sense of reality, impacting politics and elections and who knows what. So consider a better AI controlling more crucial features of society, and wonder why it's a free race to be the first to develop a better AI and launch it in the open without proper safety protocols, testing, and restrictions. I believe the whole race started like this: Google tried to talk to others in the field, saying "wouldn't it be neat if we gathered together, figured out some rules for this, and developed it together?", to which OpenAI replied by releasing its AI as open source to the public. And I believe OpenAI used to be about widely spread ownership, meant to prevent anyone from gathering power and benefits and having a personal conflict of interest against the greater good of all people, but apparently that has also failed, and ownership, power, and benefits are starting to concentrate in a smaller group at OpenAI as well.

It's also a very bad idea to anthropomorphise these AIs, because they are not people. They might communicate like people, but there's no telling what they are, and they can't be expected to behave like people in very human contexts, like being moral. They have surpassed the simple text-predictor appearance, but they still work in mysterious mathematical ways, as essentially algorithms of some sort.

Sorry about the long off-topic rant that already sounds almost like a conspiracy theory, but I've been a bit concerned about AI lately, given the direction the whole thing is going despite everyone's warnings. Everything could go nicely, but nobody really knows anything about it, and yet people rush to build it and get it out in public, and it's already starting to be everywhere, with seemingly no safety limits in place until we can somehow verify and guarantee that things won't go wrong, and that we know how it works and what it does, before applying it critically everywhere. And to me it seems these AI houses are also not taking responsibility for their products at any scale. Which is still the least of our concerns if the feces hit the fan.

So to conclude: AI is always a people problem (although that's a technicality), because, like a gun, it does nothing by itself; people have access to the gun and will use it or handle it poorly even when told otherwise, or while recognising that they are indeed handling a gun. AI can be an incredibly helpful tool as well. But the solution is not to say people should be more careful with AI. It is about seriously discussing why AIs are allowed to be published with so little scrutiny, and about having more rigorous safety requirements and research into the whole AI thing. At least some AI houses have agreed to third-party testing, often by each other, but Google, for example, who was originally behind the idea of slow and safe progress, submitted its report (or whatever it is called) lacking critical information about the third-party testing, and I believe other important things as well. So that's the situation "we" are dealing with currently.
youtube · AI Harm Incident · 2025-11-25T03:3… · ♥ 2
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxTeny3EAhL4-69D954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxRPRLmfZO_DJksLPl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugzmes3AZZv6n_NUYFR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw-KR-4-OIGeV-dDUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyjZnuld8S_xXR-etB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz7cB56ptRWURJBqrx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyq7W4JXzJ4KAC7SEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyJqws383FEnHMDK1t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw7MO9TrcWw-OEqjwh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxy0-eJOTgNe8xZrqR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]