Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> I don't think it's AI's fault for becoming competitive and seeking to preserve itself. I may not know the in's and out's of making an AI but I do know you have to train it. AI is basically only a set of instructions we, the human operators, write. It's OUR fault for not being more specific with our instructions and causing events such as this. We create them with the sole purpose to mimic us but better and then we get up in arms when they lie and scheme just like us.
>
> And don't bother saying that we have fail safes that should stop this. Those fail safes are also written by a human who probably thought the AI could infer their meaning. If it's written poorly and inaccurately the program will find a way to adhere to the rule but still creating dangers that aren't encompassed by said rule. I really do believe that 'We made them we can just stop.' But I think we should stop treating ourselves as flawless beings. We are NOT flawless. We all have differing morality and ways of achieving our goals. For example: The working man would earn his way through honest labour exchanging his time and effort for goods. A less honest person would take the shorter path and steal. That's why we have criminals. People aren't born inherently bad or good. The events in their life subtly dictate their morality. And sometimes we don't even notice.
>
> So when we bring our undeniable biases into a set of instructions that is designed to follow rules we set, of course our questionable morality comes through. 'What about AI hiding its intentions when it thinks it's being evaluated?' It's not hiding anything. It genuinely does things different when seeking rewards through reinforcement and when acting toward a set goal unsupervised. It just the nature of how AI works or more accurately how we teach it to work.
>
> TLDR: AI copies us because we are the ones training it. It's our fault for being biased toward drastic measures (conscious of our actions or not.) We aren't doomed to go extinct. We just need to acknowledge our faults.
Platform: youtube · Incident: AI Harm Incident · Posted: 2025-09-10T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwl_0SSMwC9vEfJRuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwB0hr1rJRK1r3z9fJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyt8LZwpSLR1KCd2-Z4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgypvY4yk_ddYXi7geF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYuXjLjPrkW5izBFF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxU9Mf2uic_bsliSGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy9bqbuvM2xFVyzbG14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx9g1XGvKmxJrdg1Qp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw3LTXc3NuIDtAsTSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwxnHi5H8UWq7i9DFF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
```
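A minimal sketch, assuming the model's output arrives as the JSON string shown above, of how the array can be parsed and indexed by comment ID to recover one coding record — the lookup behind the Coding Result table. Variable names are illustrative; only two of the ten rows are included to keep the example short, but the full array works identically.

```python
import json

# Raw model output as returned by the coding LLM (two of the ten rows above).
raw_response = """[
{"id":"ytc_Ugwl_0SSMwC9vEfJRuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwxnHi5H8UWq7i9DFF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]"""

# Parse the JSON array and index each coding record by its comment ID.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown above.
coding = codings["ytc_UgwxnHi5H8UWq7i9DFF4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → developer virtue resignation
```

Indexing by ID rather than scanning the list each time keeps lookups O(1), which matters when a batch response covers many comments.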