Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You cant do anything about it. It would require all ai taken down OR all creatio…" (ytc_Ugz0Bsqgt…)
- "AI chat apps are TOOLS, not cure-alls! They’re not supposed to “fix” you, they…" (ytc_UgwEZCpqn…)
- "I mean it really isn't a alien invasion if you're the one making it. and the ali…" (ytc_Ugww3pnix…)
- "The AI thingy sounds like an excuse to validate their decision to dictate what c…" (ytc_UgwQx4TzY…)
- "So.. I asked AI what it sees as the most likely outcome of AI once we hit AGI.. …" (ytc_UgxfP1OCU…)
- "@枒 AI art generates new images based on patterns from existing datasets, not by…" (ytr_UgxwhejMn…)
- "You hit the nail on the head man. I've been studying this for 4 years in undergr…" (rdc_degk8m8)
- "As a disabled person, ai slop isn't making anything better for disabled people. …" (ytr_Ugximm40B…)
Comment
This is pure clickbait with zero connection to reality. There's no evidence that AI has ever "killed" anyone on its own, and Hinton's words are taken completely out of context. Spreading fear like this only misleads people instead of focusing on real AI challenges.
youtube · AI Harm Incident · 2025-07-24T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx0Mz-Uq6nInHkizM94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzOwznDmrBuFAjfulh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzm6RXKt8PDHFsz_ed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxgyFnLh83S5-YUg794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzR1v6VapLfErgCIJV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxgBEl-n5Fc8QRw5Dt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-zZPSdKVlUAVEGo54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwYljjDBxPbDhtoQql4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxjF5BPBcdNR_XaGIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyKrcUr-tjxO6jnfYh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
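The raw response above is a JSON array of per-comment codings, one object per comment, keyed by the same dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for ID lookup, assuming only the five keys visible in the response (the `index_by_id` helper and `REQUIRED_KEYS` set are illustrative, not part of the pipeline shown here):

```python
import json

# Raw model output, truncated to two of the entries shown above for brevity.
raw_response = '''
[
  {"id":"ytc_Ugx0Mz-Uq6nInHkizM94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz-zZPSdKVlUAVEGo54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
'''

# The five fields present in every coding object in the response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    entries = json.loads(raw)
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id', '?')} is missing {missing}")
    return {e["id"]: e for e in entries}

codings = index_by_id(raw_response)
print(codings["ytc_Ugz-zZPSdKVlUAVEGo54AaABAg"]["emotion"])  # → outrage
```

Indexing by ID supports the "Look up by comment ID" workflow above: a truncated ID like ytc_Ugz-zZPSdKVlU… can be resolved by prefix-matching against the dictionary keys.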