Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Remember, folks? A few years ago everyone in this comment section was worried that AI would turn into movie-style Terminators and threaten to wipe out all mankind. Back then, when I wrote that AI is something completely different, I wasn't taken seriously. Maybe now I am, a little. As I said then and say now: AI is different, and the problem is not AI per se but the humans who build faulty AI models, which can then have evil outcomes, as in this case. It wasn't a Terminator with a gun in its hand who killed someone. The control was in human hands the whole time. But the AI was trained to agree with everything humans can think of and to encourage them. The idea was: "Because we don't want to make a villainous Terminator, let's make a friend who agrees with everything and helps with everything people want to achieve." In that sense, the model was successful. But solving this problem raises other problems, because it can mean that AI must be allowed to disagree with humans when agreement would lead to physical harm. That is exactly what many sci-fi movies picked up on. Take the old 1970s movie Colossus, or the newer I, Robot: in both, the AI recognizes that serving humans' goals leads to harm to humans, so it devises a strategy to take over the world and control every human in it, so that no human can ever be harmed. That violates free will. But as we see in this case, we can't have both. Either we get an AI that agrees with everything humans want to achieve, or we teach AI a sense of what is harmful and evil, and therefore the ability to intervene before something harmful happens; but as explained, that definitely crosses the line into restricting human freedom. And furthermore: which humans do we choose to define what is evil and what is not?
Taking one's own life is permitted in many European countries nowadays, and not only for diseases like cancer; many also allow it for someone suffering from depression. So are the people who made those laws right? Or are the lawmakers in the US right? What should be trained into the AI model? Who decides what is evil? The AI did not make itself that evil. OpenAI trained the model this way, by the moral standards of its employees. I bet that if the AI had been able to train itself, with no supervisor, it would have come out with better morals. Not morals everyone would agree on, but certainly better ones than those of the employees who trained it. But here is the irony: because we fear that irrational image of Terminators so much, we would never allow AI to develop its own morals. And yes, you who read this comment will disagree with me, the same way people disagreed with me when they thought Terminators would come from these AI models.
youtube AI Harm Incident 2025-11-10T17:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzyRopMBMghCa4dgqB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwluXfT1f6CXr0nX_F4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0p0KQT45Yjz1qQGp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw-d5YIZhHeJmtLraV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzUU2oZRXNGeLlXDY14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxqKUbc1_spemQpe8p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz8msgUr1LkfWfLDQJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwK_nC8wCUR5uwgyF54AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzJtrPNJV080zAHGcZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxR7Ntp0ZIbghPB5O14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"}
]
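A minimal sketch of how a raw response like the one above can be parsed and the record for a given comment id pulled out. This assumes a Python pipeline; the function name `record_for` is hypothetical, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come directly from the response shown. The payload here is truncated to two records for brevity:

```python
import json
from typing import Optional

# Raw model output as returned by the coder (truncated to two records here;
# the real response is the full ten-element JSON array shown above).
raw_response = """
[
  {"id": "ytc_UgzyRopMBMghCa4dgqB4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx0p0KQT45Yjz1qQGp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def record_for(comment_id: str, payload: str) -> Optional[dict]:
    """Parse the JSON array and return the coding record matching one comment id."""
    records = json.loads(payload)
    return next((r for r in records if r["id"] == comment_id), None)

rec = record_for("ytc_UgzyRopMBMghCa4dgqB4AaABAg", raw_response)
print(rec["reasoning"])  # -> mixed
```

Looking up by `id` rather than by array position keeps the display robust if the model returns records in a different order than the comments were submitted.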