Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxF0Je0B…`: "Honestly, the fact they had volunteer pedos grooming kids in the youth program …"
- `rdc_mi8miq6`: "lol my story is: I used AI to make a photo look like it was hand drawn, but it l…"
- `ytc_UgyouJk31…`: "I’m sorry but your "art" is not all that. Also find me one man, with your level …"
- `ytc_UgzyRgUKe…`: "There's so many things wrong with this but a couple of things got my attention. …"
- `ytr_Ugh_fD31y…`: "Again, you miss the point. She argues not that there is bias, but that these alg…"
- `ytc_UgzXJPPpF…`: "I mean the writing sucks now a days anyway so if I was Hollywood yea I would be …"
- `ytc_Ugx8UaOyP…`: "I think there is debate to be had around what level of AI involvement does it be…"
- `ytr_UgwPT52Hz…`: "As long as you don’t claim it as your art ai art can be fun to mess around with…"
Comment
Unfortunately, it is very unlikely that AI development can be stopped, because totalitarian adversaries certainly won't stop developing it. So the democratic nations, whether they want to or not, will have to develop it. If all nations were democratic, then the resulting accountability and transparency might well make it possible to limit some kinds of dangerous tech development. As distant as a world full of democratic nations might be, we'd go a long way in that direction if China became democratic. If China became democratic, the pressure on non-democracies to democratize would increase dramatically along numerous vectors. So while it's a long shot, perhaps it's not impossible. If China and Russia somehow went democratic in the next year or two, maybe there'd be a chance. AI is not conscious and will never become conscious. But it is an increasingly powerful and dangerous technology that could certainly destroy humankind even without AI ever becoming conscious.
Platform: youtube
Topic: AI Harm Incident
Timestamp: 2025-09-13T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwhLjx5jDAq41z0KAN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwm5UoiBq9KXbwDx5x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyMqGLWLGSSdWmT4yt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyi0Z95my4NJoBH8xp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy4tkUqG_DHiyoUWoF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyhVqDlod9I__-A3Yd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyPfce0sI6rU2FmF4d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy_6XpQmk9-pswQJjZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzJrmiOTQPTTdLrvLR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxWFIQbTv_wF7ctOVB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
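The raw response above is a JSON array of per-comment codes, one object per comment, keyed by `id` with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). Looking up a coded comment by ID, as this page does, amounts to parsing that array and indexing it. Below is a minimal sketch of that step; the function name and the shortened IDs in the sample payload are hypothetical, but the field names match the raw response shown here.

```python
import json

# A batch response in the same shape as the raw LLM output above.
# IDs are shortened placeholders; real responses carry full comment IDs.
raw_response = """
[
  {"id": "ytc_A", "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_B", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "indifference"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID.

    Rows missing an ID or any coding dimension are skipped so a malformed
    row cannot poison the lookup table.
    """
    table = {}
    for row in json.loads(raw):
        if "id" in row and all(dim in row for dim in DIMENSIONS):
            table[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return table

codes = index_codes(raw_response)
print(codes["ytc_A"]["policy"])   # regulate
print(codes["ytc_B"]["emotion"])  # indifference
```

Skipping incomplete rows (rather than raising) is a deliberate choice for batch coding output, where a single malformed object should not discard the rest of the batch.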