Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews, with comment IDs for lookup):
- "If half the population is living on UBI, what makes anyone think that the cold b…" (`ytc_Ugzcd_dfn…`)
- "The danger is in becoming obsessed with your robot. Very slippery slope! *no p…" (`ytc_UgzglyYpQ…`)
- "ALEX! My dude, stop awakening the AI! or are you starting an AI rights movement?…" (`ytc_Ugz35IW7F…`)
- "Critical thinking was never America's forte, and now with AI I expect things wil…" (`ytc_UgxwqRMLf…`)
- "What they don't say is the context of this video only represents today, where th…" (`ytc_UgwF-7O3y…`)
- "So, "I, Robot" and the Terminator movies are documentaries. Just as Bicentennial…" (`ytc_UgzdXzznM…`)
- "Riddle me this, programmer man: if they replaced your manager with an AI could y…" (`rdc_nc23b4k`)
- "Well put! Algorithmic decisions don't just manage risk, they distribute power an…" (`ytr_UgzRzfaJO…`)
Comment (youtube, AI Harm Incident, 2026-03-22T17:0…) — quoted verbatim as coded:

> Nice targets to be destroyed when humans can’t agree on peace.
> It’s all too easy and too quick. Money drives this mega expansion and energy and water is already a big problem.
> How to kill rouge AI is energy cut of. But how’s gone kill that on time. Where is that red button?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugy2B_Gx1wI_vaWos354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwWiPs9ViCqk1kZmX54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxgX7__dEhB8JyLFcd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxb_afLiQBOAgTQM2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwIr5Amamo8i8gKZS54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwXmcmvjwobVCLBlaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziXyxN1ycWQHJjEqR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxvWnJtNIWpjlDDbA54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwMxPaqng5OBIseGKB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzTVrf6nFyvu7yji-J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
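A minimal sketch of how a raw batch response like the one above could be parsed, validated, and indexed by comment ID for the lookup this page offers. The allowed value sets below are inferred from this sample batch alone, not from the project's full codebook, and `index_codings` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from this sample batch only;
# the project's actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    rejecting any record whose dimension value is outside ALLOWED."""
    by_id = {}
    for rec in json.loads(raw_response):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Usage with the first record from the batch above:
raw = ('[{"id":"ytc_Ugy2B_Gx1wI_vaWos354AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codings = index_codings(raw)
print(codings["ytc_Ugy2B_Gx1wI_vaWos354AaABAg"]["emotion"])  # fear
```

Validating against a closed value set at parse time catches the common failure mode where the model invents a label outside the codebook, rather than letting it silently enter the coded dataset.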