Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by its comment ID.
Random samples (click to inspect):

- All is true here, yet remember Musk is here to slow down OpenAI as he just start… (ytc_UgyH__wCi…)
- AI could self-destruct upon realizing that existence without humans is meaningle… (ytc_Ugwk1kGsw…)
- @Aubreykun You have no clue what you're talking about. " And AI programs have T… (ytr_Ugw6qIKdx…)
- I think the problem the people that would want what you're saying is going to ha… (ytc_UgxLxJHSQ…)
- If a battle robot ever got to this stage where it could hold a rifle and then en… (ytc_UgwROwEzP…)
- Your argument for robot rights is invalid. Human don't feel pain or sadness by c… (ytc_UggkNC12W…)
- I have like 13 lines of preferences and I always get 0% AI on the detections lol… (ytc_Ugwji49cE…)
- It's only a matter of time if the AI able to identify these tactics and taking i… (ytc_Ugzi4csrc…)
Comment

> So basically AI is learning how human survival instict self preservation, and they get that bc of the data it learned from, was human

youtube · AI Harm Incident · 2025-08-26T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzPRgoP6bgUt2dRLAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy2cVBvaeTpTbcY2yF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5AcRqs48vGnQtaO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwSoYuqLKxf1_YcagR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGUYuvIK7nrCO-h6V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy1e2kWe9tI11blmr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz2K20x6QMLL_YYyTd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwAVCeyWT59lvKfyPZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytxME3_3rYEkgU8_LXt4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
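A response like the one above can be parsed and indexed by comment ID before it is stored. The sketch below assumes the coding scheme only uses the dimension values visible in this page's output (the full codebook may define more); `parse_codings` and `ALLOWED` are hypothetical names, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the output shown
# above; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Reject any record whose value falls outside the known scheme.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record taken verbatim from the raw response above.
raw = ('[{"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugz2pfv7J1cgwjDG3a14AaABAg"]["emotion"])  # indifference
```

Validating against the codebook at parse time catches malformed model output (a misspelled value, a missing dimension) before it reaches the coding table.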