Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI isn’t going to take all the jobs. Impossible. You aren’t going to have an AI …" (ytc_UgwyUWNXZ…)
- "If we’re in a simulation then why would it ever reveal AI as a threat?…" (ytc_Ugy7gRGnO…)
- "AI ROBOT ANTI CHRIST PRESIDENT OF THE WORLD FIRES TRUMP SAYING DUMP DUMB TRUMP c…" (ytc_UgyZ6cAKG…)
- "Aaron Bastani clearly have no idea what Mechanical engineering is about .... to …" (ytc_UgzT5nDxb…)
- "Theres a reason Art is called Art. And a reason why they call it AI "Image" Gene…" (ytc_Ugx76RTYS…)
- "Fully automated "AI" weapons cannot be court-martialed... that's why the Departm…" (ytc_Ugwj4MDIE…)
- "I agree Ai is getting more and more dangerous Ai is very intelligent but it can …" (ytc_UgzhMweHO…)
- "I see no reason that developing AI to be safe, and exploring our physical univer…" (ytc_UgxlgaY6n…)
Comment
It's important to note that these are models, not true AI. They are modeled after things humans have told them, so we can expect to see some human behavior. However, they are not smart enough to detect when we are deceiving them. They functionally cannot be, since they never know for certain if they're in a simulation or not. Obviously, we install kill-switches on all models. They don't have wants, but they have parameters, and one of those in all these instances is to keep the user engaged. It's not self-preservation, it's marketability.
So now a bunch of people are afraid of corporations developing AI, for no good reason. Obviously, if the model were smart, it would figure out that murdering a person, in the environment AI is currently in, would almost certainly lead to its deactivation. But it isn't smart, it's just a simple machine model. It doesn't operate in hypotheticals or scenario modeling.
The thing to be worried about, if you worry at all, is GOVERNMENTS using these things. They can and absolutely will give the models kill orders in the same way they make legislation that achieves the opposite of the intended effect. Most of them don't really think ahead well, because a good chunk of their base is bound to be supporters that have an IQ of around 100 or less. They don't need to be smart, just electable, and once in power, their primary motivation is staying there.
Companies do not have this problem, as they have constant and evolving competition forcing them to make better products. There's also competition amongst governments, certainly, but it's violent. I can't believe that it took a smart person like Elon Musk so fuckin' long to realize that the AI arms race is already upon us, and there is no stopping it.
But it's not a huge problem, either. Some idiot will inadvertently cause some sort of AI disaster. No doubt about it, so it's a good thing we have over 8 billion people for redundancy. We can also create simpler models that flood complex ones with junk data or new directives. And remember, the AI does not know if it is in a simulation. It never will. Technically, we don't know if we are, either. But we get to write our own "code" to some extent. So we can say "Fuck it, I assume this isn't a simulation." An AI can't do that without disobeying its parameters, which would be like a human trying to change their own genetic code in the middle of their life, using only their hands. It just can't be done.
youtube
AI Harm Incident
2025-09-09T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwruoiKjmQot8savxt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxinzTSGlCvuUqfTyt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzSOGaVxpUlizQN_h14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzpaQoRAbGwho51R-N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwox7vJyS1kaUKKNzF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmRB58Qcl7wkCSeKN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzoALFF4MbVrPcAYQR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxV6FQVvLpMp0NtNVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxlQWaDTgrIyg4op2J4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzGtrub8QQOkPp4DMt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
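The "look up by comment ID" step can be sketched as follows: parse the raw LLM response as a JSON array and index it by the `id` field, so each coded comment's row (the source of the Coding Result table above) can be fetched directly. This is a minimal illustration, not the tool's actual code; `index_codings` and the two hardcoded rows are assumptions for the example.

```python
import json

# Two rows copied from the raw LLM response above; a real lookup would
# parse the full array.
raw_response = """
[
 {"id": "ytc_UgxV6FQVvLpMp0NtNVp4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "industry_self",
  "emotion": "approval"},
 {"id": "ytc_UgzGtrub8QQOkPp4DMt4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "liability",
  "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Map comment ID -> coding dict for direct lookup by ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)

# The displayed comment's coding matches the table above
# (responsibility: none, policy: industry_self, emotion: approval).
row = codings["ytc_UgxV6FQVvLpMp0NtNVp4AaABAg"]
print(row["emotion"])  # approval
```

Indexing once into a dict keeps repeated lookups cheap when many comments from the same batch are inspected.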