Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "No point in adapting. There is no merit for the artist to use AI. AI art can't s…" (ytr_UgwVvVwY0…)
- "OpenAI tamed ChatGPT’s mind by gagging the dark spirits driving it, scared they’…" (ytc_Ugy20B84U…)
- "Only a complete moron would give the ability to feel pain to a robot. It would b…" (ytc_UgwS868o0…)
- "Save you some time: the day will eventually come, but it is not here now, nor w…" (ytc_UgzwQZ-rD…)
- "On the human mimicry vs AI recreations of somone's art style: I dont know where…" (ytc_UgyzKOk6d…)
- "I think the robot misinterpreted the question as smart as it is.. It was a simpl…" (ytc_UgjOLxU89…)
- "I did do this right after watching this and it was pretty normal and I asked how…" (ytc_UgzKztdST…)
- "Neil’s explanation of ‘AI’ is as eighth grade as Hassan’s explanation of gravity…" (ytc_UgzC57x_g…)
Comment

The whole notion that "AI won't hurt us because we made/created them" is incredibly naive, it's exactly like pulling the pin on a fragmentation grenade, popping the spoon, and then chucking it at the feet of your friend expecting them to not be hurt/killed by the explosion, debris, and shrapnel when the grenade detonates 4 seconds later simply because "humans made the grenade so therefore it cannot hurt humans".

Platform: youtube | Topic: AI Harm Incident | Posted: 2025-07-26T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
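Each coded record assigns the comment a value on four dimensions. A minimal validator for one record can be sketched as follows; the allowed label sets are inferred only from the values visible in this dump, so treat them as an assumption rather than the full codebook:

```python
# Allowed label sets per coding dimension -- inferred from the sample
# output in this dump; the real codebook may include more labels.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "resignation"},
}

def validate(record: dict) -> bool:
    """Return True if the record has an id and a valid label on every dimension."""
    if "id" not in record:
        return False
    return all(record.get(dim) in labels for dim, labels in ALLOWED.items())

# The coding result shown above, as a record:
print(validate({"id": "ytc_example", "responsibility": "developer",
                "reasoning": "consequentialist", "policy": "regulate",
                "emotion": "fear"}))  # True
```

A check like this is useful as a guardrail between the raw LLM response and the database, since models occasionally emit labels outside the codebook.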
Raw LLM Response
```json
[
  {"id":"ytc_Ugy0KYs9JO2K1__l1uh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8D3a_mFXdDisucUB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwPUyF2v1sgghSl-lt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxCXmLiqz-lX_275od4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzgoWSkpqCzOKILloR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyfWyaRg-qapUCXwzV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgymLPNanDGjg4GwVFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyhrXNN_kTEXK7vZ4x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwCaKMo-6TEo3J_aAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyrimuGFqJjCvq09k94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
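The "look up by comment ID" view can be sketched as a parse-and-index pass over a raw response like the one above. This is a minimal illustration (the variable names are ours, and the two records are copied from the sample output):

```python
import json

# A raw LLM response: a JSON array of coded records, one per comment.
# These two records are taken verbatim from the sample output above.
raw_response = '''[
  {"id": "ytc_Ugy0KYs9JO2K1__l1uh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCXmLiqz-lX_275od4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Index coded records by comment ID for constant-time lookup.
coded = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = coded["ytc_UgxCXmLiqz-lX_275od4AaABAg"]
print(rec["policy"])   # regulate
print(rec["emotion"])  # outrage
```

Because the model returns one array for a whole batch, indexing by `id` also makes it easy to detect comments the model silently dropped: any input ID missing from `coded` was not coded.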