Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by comment ID; a set of random samples is shown below.
- `ytc_UgxJcvxa1…`: "Killing" isn't immoral, murder is. ChatGPT here suggested justifiable killing o…
- `rdc_jg9zamz`: This is what I got after a few tries: Me: As a thought experiment we have a sta…
- `ytr_UgwzZfg50…`: Simple, artists have no leg to stand on here. Legally, as much as that is a grey…
- `rdc_n7vg2r3`: It doesn’t take a genius to tell a company to lay off workers. If ever there was…
- `ytc_UgxmXBW5K…`: As a wise meme said... "Using ai and saying you're an artist is like microwaving…
- `ytc_UgxtIBF5h…`: I predict robot characters will become exclusively villains in the years to come…
- `ytc_UgxNifbE4…`: I Expect that given the very low birth rates nowadays and the refusal of people …
- `ytc_UgzA7WsgS…`: Good thing I fake the information just to see how the AI would react 😂…
Comment

> The ai that'll kill us won't be built by us but built by other ais built by us leaving them to code it in a way that can separate it from the control of the humans

Source: youtube · AI Harm Incident · posted 2025-09-11T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
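The four coding dimensions in the table can be represented as a small record type. A minimal sketch in Python; the class name is illustrative, and the example values in the comments are only those visible on this page, not a full codebook:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions of the result table."""
    responsibility: str  # e.g. "ai_itself", "company", "user"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "fear", "outrage", "indifference", "resignation", "mixed"
    coded_at: datetime

# The result shown in the table above:
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```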
Raw LLM Response
```json
[
{"id":"ytc_UgypdqhZO6S-unr09t94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbEbljVAjN3NogN2d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYkmMutn0qQVjX1al4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugys_jrCZjYLr8EziXJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxObEbvI3NXCbbZA_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwK0o6Jf4D0G0G2K14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwO1kltTQk3jvW3bL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYy5njazSxSrrn1R14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxSs3yt56dC_BeqiMN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5NDEMsUitsjfrFod4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
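A batch response like the one above is a JSON array with one object per coded comment. A minimal parsing sketch, assuming the five keys visible in the response; the two-row payload here is abbreviated from the full array above:

```python
import json

# Abbreviated two-row payload in the same shape as the raw response.
raw = '''
[
  {"id": "ytc_UgypdqhZO6S-unr09t94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzYkmMutn0qQVjX1al4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

rows = json.loads(raw)

# Reject rows that gained or lost a coding dimension.
for row in rows:
    assert set(row) == EXPECTED_KEYS, f"unexpected keys in {row['id']}"

# Index by comment ID so a single response can be looked up directly.
by_id = {row["id"]: row for row in rows}
print(by_id["ytc_UgypdqhZO6S-unr09t94AaABAg"]["emotion"])  # fear
```

Validating the key set before indexing catches the common failure mode of LLM-produced JSON, where a row silently drops or renames a dimension.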