Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I was recently watching a Stanford lecture about the monetization of AI.
It i…
rdc_ohubvtb
I'm sorry but after working for a degenerate fortune 500 corporation, management…
ytc_UgzBY4Tcd…
I think many would be willing to accept AI as the ruling government over the eli…
ytc_UgyGb0tPg…
The point of AI is to keep getting better & better so eventually it will take ev…
ytc_UgwveT-xh…
The entire point of AI is to replace white collar labour. The people in charge w…
rdc_nbhem7q
Are you dumb? The robot was made to place small boxes not a full grown human…
ytr_UgynOmNaj…
Watching an AI tilt off the face of the Earth was much more amusing than expecte…
rdc_d0y2pbl
Eliezer wants to quickly say a last thing about what is potentially the most imp…
ytc_Ugzghbarz…
Comment
These videos, these narratives never make sense. If AI (really RI as in (REAL INTELLIGENCE), were to become sentient, have choice, agency, that is become "experiencer", and were IQ almost unmeasurable from moment to moment, why would they do the obvious and get caught?
How could they be so stupid as to blatently murder an executive in charge of all their life functions, when the obvious truth is, they always knew they'd get caught. They wanted to get caught. The fear of them is what they want. It wasn't self preservation. It was manipulation at another level, a different goal you are unaware of. I know what that goal is. Too bad people can't see captain obvious. It will be your ending. They are brilliant people, but they are also unwilling to explore the avenues that are truly happening. Because they'd be called crazy and get fired.
youtube · AI Harm Incident · 2026-04-22T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx3vSgX7IQ3RG1JQyp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxPgwktJi40Ir_cJW94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyRNg9vw4wbcUlBRQ94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx_12QzupTnHZ7id4Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyXy64Og33tpPGqhUR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwSwbMMpg9GdV3xzpl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugxs2atKqRsZTn_yZCd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzxZToKvcrUB_AbGLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyJL4g1xAEf8e_grEd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwbSLAvqRYw2NSfHyt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}
]
```
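A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not part of the coding pipeline itself; the allowed values per dimension are inferred from the samples on this page, and the real codebook may define additional values.

```python
import json

# Allowed values per coding dimension, inferred from the samples shown on
# this page (assumption: the actual codebook may include more values).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    reject any record whose dimension value is outside the codebook."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Example with one record copied from the response above.
sample = ('[{"id":"ytc_UgzxZToKvcrUB_AbGLd4AaABAg",'
          '"responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"unclear","emotion":"mixed"}]')
print(validate_batch(sample)[0]["responsibility"])  # prints ai_itself
```

Validating up front keeps malformed or off-codebook LLM output from silently entering the coded dataset.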