Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI is tech. Tech is amoral, meaning it doesn't even have the capability of being described as moral or immoral.
Here is the catch: today we say "AI" when we mean LLMs (Large Language Models), which are a very specific type of AI. LLMs are, at their core, very fancy probability machines whose goal is predicting the next word in a text. The problem with these "evil" behaviors comes from how those probabilities are calculated. Being really reductive here: the computer assigns a number to every possible word, that number is its probability, and the word with the highest number is used. If you are paying attention, you are now wondering: then why aren't LLMs spitting out random nonsense like "kljhasg7235"uiiigu1ofc" all the time? Because we train them to produce things that sound human. In very simple terms, we punish them if they output something that doesn't sound like something a human would say, and reward them if it does.
Recently I have found this metaphor really useful. Imagine you have to give a presentation, but you have no idea what the presentation is about, and if you ever sound like you don't know what you're talking about, you get shot. At the moment of the presentation you get a prompter displaying a PowerPoint deck meant to illustrate the subject of your talk, so you use it and do your best to sound like you know what you're talking about. Now, if you knew all of this and were one of the people evaluating the presentation, would you say the presenter has an intrinsic understanding of the subject? Would you bet they would ace a test on it? Probably not.
LLMs are like those presenters: they are evaluated on how human they sound. They don't really know what they are talking about; they just want the audience to think that they do.
Now imagine you have to give a talk on business strategy for a business you know nothing about, but you've been watching the news about how health insurance companies are screwing people over, how millionaires are hoarding everyone's wealth, how mega-corporations are harvesting everyone's data, and so on. What kind of talk do you think you would give?
LLMs are not "good" or "evil"; they don't even understand what that means. They are just spewing out text that sounds like something a human would say. LLMs are screwing people because people screw people.
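The commenter's "probability machine" description maps fairly directly onto code. Below is a minimal Python sketch of next-word selection; the tiny vocabulary and all the scores are invented purely for illustration, and a real LLM scores tens of thousands of tokens using weights learned during training rather than hand-picked numbers.

```python
import math
import random

# Hypothetical scores (logits) for a toy vocabulary; a real model learns these.
logits = {"the": 2.1, "cat": 0.3, "sat": 1.4, "kljhasg7235": -6.0}

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Greedy decoding: take the single most probable word...
greedy = max(probs, key=probs.get)

# ...or sample in proportion to probability, which most chatbots actually do.
sampled = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(probs)   # training has pushed gibberish like "kljhasg7235" toward zero
print(greedy)  # -> "the"
```

This is why a trained model produces fluent text rather than random strings: the training signal the comment describes (reward for human-sounding output, penalty otherwise) is what shapes those scores.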
youtube · AI Harm Incident · 2025-09-10T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx-JseWejvAy9OM9Dx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx0434kCWb2kXes34h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmAbi14smMrfFQV3Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyunZKpf55nMvT3DXF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZanMv4YVd--jUj7Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzCkB-du1qpuU2sZlt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwyl1BD0Xwf4dM2hn94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJw2-eTzCehSgACjB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy_AVXFo-7ZAZ4-zcd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjEodrvG8kHm9czhJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
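For anyone consuming these dumps programmatically, here is a small sketch of loading and sanity-checking one raw response. The allowed values below are inferred only from the rows visible on this page, so the project's actual codebook may define additional codes, and the function name is hypothetical.

```python
import json

# Code values inferred from the rows shown above; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def check_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and verify every row uses known codes."""
    rows = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}")
    return rows
```

A check like this catches the most common failure mode of structured-output coding: the model inventing a label that is not in the codebook.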