Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below; a scripted version of the ID lookup is sketched after the list.
- "you also can’t ask the ai why they did a certain thing or why they used a certai…" (ytc_UgxB5A71a…)
- "Ai art isn't art. It's what ai thinks is art. Art is objectively for humans by h…" (ytc_UgxCG8Kgf…)
- "There are a couple things AI will never ne able to do a couple things. One of th…" (ytc_UgwyrUYdk…)
- "i am going literally going to torture myself and i am force to use AI art for a …" (ytc_UgyQkDEnt…)
- "And popularize boosting social safety net programs and things like UBI while dec…" (ytr_Ugxd_J1-q…)
- "Nonsense. Stop hyping this unreliable tech. It's a scam. The current technology …" (ytc_UgxIPWHrw…)
- "Solution: set a geofence and do not allow honking on a waymo parking space. Dude…" (ytc_UgxD0tYeK…)
- "Am I the only one who finds this creepy? I think AI is very dangerous simply bec…" (ytc_UgyhNd32X…)
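For scripted access, the same ID lookup can be reproduced outside the UI. A minimal sketch, assuming the raw responses are stored as a JSON array of coding objects like the one shown at the bottom of this page (the file name `raw_llm_responses.json` is a hypothetical placeholder):

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Load one raw LLM response file: a JSON array of coding
    objects, each carrying an "id" plus the coded dimensions."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    # Index by comment ID so lookups like the UI's are O(1).
    return {rec["id"]: rec for rec in records}

codings = load_codings("raw_llm_responses.json")  # hypothetical file name
print(codings.get("ytc_UgyDhMUxrOFS5Tl-C114AaABAg"))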
Comment
I think there's another possibility that's more subtle but just as bad. If AI takes over, it need not be Skynet, Tacitus, or the basilisk. It need not be HAL to be the bane of humanity. Even an honestly benevolent AI, if put in charge, could be the doom of humanity.
Consider, if you will: a ubiquitous AI system whose only goal is to keep humanity safe and happy. We'll ignore cases where it would find the happiest person and kill everybody else, thus maximizing average human happiness. This hypothetical AI understands that killing humans outright is wrong. But it wants them happy and it wants them safe. Like... pets, really. The thing is, perfect safety and perfect happiness are... boring. While the amount varies from person to person, we all need a little bit of danger. Some people climb mountains. Some people skydive. None of them will tell you that a safe VR simulation is the same. And strife is an engine that fuels progress.
Can you imagine what would happen to humanity if no one ever had to accomplish anything themselves? If all they had to do was ask for it, and an AI would do it for them, fabricate it, or simulate it virtually? What would be the point in living? Where we exist is a place where we are always striving. We will never achieve world peace, but we strive. We will never eliminate evil, but we try anyway. When we play games, we don't skip to the end cutscenes; we play the game. It challenges us and we overcome it, or give up. That's why we give trophies to winners and not to losers or spectators. But it's a balance. We must be able to see the goal as possible to reach, because if we cannot, that's just as bad as simply being handed the goal.
I played Dark Souls back in the day. I beat it, too. On my own, offline. One of the things that made Dark Souls a great game was that when you defeated a difficult foe, you felt you had accomplished something. You felt good. And you could talk about it with other people who had experienced the same thing and felt the same way. How different it would be if all you had to do was say "play for me" or "beat the game" and the game was over.
In such a scenario, individual humans would still be alive, but humanity would, in all the ways that actually matter, be dead, just as surely as if the humans had been physically exterminated, as so many fictional AI systems have tried to do for a wide variety of reasons.
I tried to capture this in an album a few months ago, though I'm not entirely sure I actually captured it as thoroughly as I wished:
https://www.youtube.com/playlist?list=PLinJSw_KYGAfyUjBnHgZ2zUNKQTRw6DRs
AI Harm Incident · 2025-09-10T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
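Each row of this table corresponds to one field of the stored coding record. A minimal sketch of that record as a Python dataclass, assuming the field names follow the dimension labels above (the class and field names are illustrative, not the pipeline's actual types):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Coding:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str      # e.g. "ytc_Ugwzb8I-WWwlPz9yDbl4AaABAg"
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "fear"
    coded_at: datetime   # e.g. datetime.fromisoformat("2026-04-27T06:26:44.938723")
```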
Raw LLM Response
```json
[
{"id":"ytc_UgyDhMUxrOFS5Tl-C114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmYtal7_GMz0mQ7OR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgygJZY_v2OL7uPeK5J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugys6OpKsyGsTr8MRmh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyxUGGW5Pr1bv3fQK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwzb8I-WWwlPz9yDbl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz7-LXpx6emjn0QO1F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLAad1baZrywknuQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyJDtt1KKrl8oJ_oqJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzT473WlishavgK60t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
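Because the model returns a bare JSON array, a downstream step can sanity-check each record before accepting it. A minimal validation sketch; the allowed label sets are inferred only from the values visible in this one response, not from the full codebook:

```python
import json

# Label sets inferred from the values visible above; the real
# codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def validate(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response."""
    problems = []
    for i, rec in enumerate(json.loads(raw)):
        # IDs in this dataset start with "ytc_" (comment) or "ytr_" (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            problems.append(f"record {i}: missing or malformed id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: {dim}={rec.get(dim)!r} not in codebook")
    return problems
```

An empty return means every record parsed cleanly against the inferred label sets.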