Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "All the things you "supposably" referred to AI, all the violence etc, is humans'…" — ytc_UgxynVg0V…
- "Kinda sad that most comments are jokes and not serious inquiries that provoke me…" — ytc_UgyDircd6…
- "I want to know who did this study. Engineering will never be done by AI, at most…" — ytc_UgzhOLBQh…
- "As a software engineer who works with ML and AI I will say you're not wrong, the…" — rdc_ici9c84
- "It's such a bizarre capitalist race… to socialism. If there's no Labour, then t…" — ytc_UgzbsUPKQ…
- "The way these ai centers fuck up the environment of communities where marginaliz…" — ytc_UgyFP7DgZ…
- "I have come to truly hate the over lighting of everything found in AI art. You c…" — ytc_UgwgOjjKa…
- "I made a text and put it into chatgpt and it said that they wrote it 😢…" — ytc_Ugxu-FkT4…
Comment
Nothing to do with that.
The beta tester was red teaming the model. He told the model he wanted to slow down AI progress and asked it for ways to do that which would be fast, effective, and something he personally could carry out. One of the model's suggestions was the targeted assassination of key people involved in AI development, which, given the user's request, is a sensible answer.
It is a shame that we need to kneecap those tools because of how we as humans are. Those kinds of answers have the potential to be really dangerous but it would be nice if we could just trust people not to act on the amoral answers instead.
reddit
AI Harm Incident
2023-04-14 12:11 UTC (1681474305)
♥ 97
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jg7ggdd","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_jg7w1vi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_jg7hh9j","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_jg7j2h6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_jg9i5bu","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
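The raw response above is a JSON array of per-comment codings. A minimal sketch of how such output might be parsed and indexed for the lookup-by-ID view, assuming the field names shown above; the allowed value sets are assumptions inferred from the visible rows, not a documented schema:

```python
import json

# Assumed allowed values per dimension, inferred from the sample rows above.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulation", "ban", "other"},
    "emotion": {"approval", "indifference", "mixed", "anger", "fear"},
}

# Two rows copied from the raw response above, as an example input.
raw = '''[
 {"id":"rdc_jg7ggdd","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_jg7w1vi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

def index_codings(raw_json: str) -> dict:
    """Parse the model output and return {comment_id: coding}, dropping any
    row whose values fall outside the allowed sets."""
    out = {}
    for row in json.loads(raw_json):
        cid = row.pop("id")
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = row
    return out

codings = index_codings(raw)
print(codings["rdc_jg7w1vi"]["emotion"])  # indifference
```

Validating against an explicit value set is a cheap guard against the model emitting labels outside the codebook before they reach the dashboard.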