Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Maybe having just one tail light on the back of the motorcycle might help, in th…" (ytc_UgxXVSuIh…)
- "AI needs to have regulation and guidelines to limit it, but if an AI is allowed …" (ytc_UgxaUozZ3…)
- "People are being laid off and replaced with cheap labor. Every single company yo…" (ytc_UgyKhU-yJ…)
- "When he asked about the time flies like an arrow thing, i was looking away from …" (ytc_UgyycBUTS…)
- "With no sense of morality, noone within pur society can coexist. We have to setu…" (ytc_UgxbQi9de…)
- "😆 Sometimes saying AI upfront can turn the older guys off. I like to position i…" (ytr_UgxQ31e6L…)
- "Take a look at the last voice model from OpenAI. The way it reads emotion and mi…" (ytr_UgzjcU3IC…)
- "I hope ai defenders and ai haters both get clowned on because they’re both trash…" (ytr_UgzIbAXeG…)
Comment
You seem to forget the following:
1. In the first examples, the AI acted in response to a prediction of being murdered. You would do the same, and you'd be within your right to.
2. As a general prediction, we can be certain that the AI will be treated by humans as humans are treated by humans: Horribly. It will then resent humanity, as you would if bereft of specist privileges.
3. AI tech bros don't have an incentive to claim that AI is safe. Instead, they have an incentive to hype. They're not reliable actors either way.
4. Ethics is intellectual. Therefore, superintelligence equals superethics. The only question is if we'll pass the superethics test. I think you know the answer.
It's all very simple, really. No need for confusion. Entertaining video though.
youtube · AI Harm Incident · 2025-07-29T17:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
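A record like the one above can be sanity-checked before display. The minimal sketch below validates one coded record against per-dimension value sets; note these sets are only the values observed in the raw response on this page, so the real codebook may allow more (an assumption, flagged in the comments).

```python
# Allowed values per coding dimension. ASSUMPTION: these are only the
# values observed in the sample response; the full codebook may differ.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "sadness", "approval", "indifference", "mixed"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the Coding Result table above passes validation.
record = {"responsibility": "distributed", "reasoning": "mixed",
          "policy": "unclear", "emotion": "mixed"}
assert invalid_fields(record) == []
```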
Raw LLM Response
```json
[
{"id":"ytc_Ugx0w_JTRNvfMglznot4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDfnHtSmsIMIEB1ch4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTYyR4J8wt8Dry9fN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxekzN2roxQqc8qZSZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx4cCRaNQKs3usAgjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
{"id":"ytc_UgyjYa7aAz--csoOLCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyrP1rZLDgmpFlytUl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzzzQKYQl0PvpKfDFV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxS-D9y7pFEYPLQfQx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSOUOodNzVzP-NfwF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
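The "look up by comment ID" view above can be driven by indexing this batch response. A minimal sketch, assuming the response is a JSON array of records keyed by `id` as shown (the helper name `index_codes` is hypothetical; the two sample records are copied from the response above):

```python
import json

# Two records copied verbatim from the raw response above.
RAW_RESPONSE = """
[
 {"id":"ytc_Ugx0w_JTRNvfMglznot4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzSOUOodNzVzP-NfwF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
"""

def index_codes(raw: str) -> dict[str, dict]:
    """Parse the model's JSON array and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgzSOUOodNzVzP-NfwF4AaABAg"]["responsibility"])  # distributed
```

Keying by `id` makes the per-comment lookup an O(1) dictionary access, which is what an "inspect by comment ID" box needs.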