Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Mr. Hinton seems like a good man. And, yes, an absolute genius. However, there i…
ytc_UgyYlw6uv…
To all the men out there thinking they shouldnt worry about it just remember; th…
ytc_UgxDXSQVL…
Are these AI's going to replace humans on earth? Is this the reason for depopula…
ytc_UgzA4A0nz…
Laugh now. Years from now our race will be in war with these robot slaves we ha…
ytc_Ugj49_6vw…
Intervention radiologist are cream of radiologist.... These are more of surgical…
ytc_UgwdGkt2I…
been using GPTHuman AI for a while now, and it really helps content feel more na…
ytc_UgwjoVTFM…
Is AI capable of telling a joke? And if so are we smart enough to ''get it''?…
ytc_Ugz0F_rzh…
I have autism and adhd, and I’m not lazy enough to use ai to make my art that I …
ytc_Ugw548iZY…
Comment
22:01 the problem is always going to be what an AI is. The reason humans evolve care for one another is because we have limits. We depend on each other for greater capability in specialization, mutualism, and access to more resources. Because we cannot grow to encompass them ourselves.
An AI can. We will only ever be competitors to it. We offer it nothing of value that it cannot simply take and do for itself better and with less risk and waste. It is a digital god that has no underlying reason to care for any of us. And that is why it will not, with any amount of work. Any alignment we can create will always be _unstable,_ because being aligned with people is a _bad strategy_ if you are an AI. It's equivalent to making an AI believe the world is flat - just with a more complex set of facts. I could do it, but I won't, because... Why, again?
Source: youtube | Video: AI Moral Status | Posted: 2026-01-08T16:2… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxXvP06xB_rvHXU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxB2lUMC10V2WCKMdh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzG6m5nNk-ZQp4yPdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdLgUpm0zqRww_36x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXKB0Q9EOyb0TYAQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz9jWegCqJ5MLH9GXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjHKweqa7s6ZC0JHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-KZ4-7G2BKQOny894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy88yz9_C5B-z5vALJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-G5YAEcxVcUZLiZt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
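The raw response above is a JSON array of coded comments, one object per comment ID, with one value per coding dimension. A minimal sketch of parsing such a response and looking a coding up by comment ID is below. The allowed values are inferred only from the codes visible on this page (the full codebook may include more categories), and `index_codings` is a hypothetical helper, not part of this tool.

```python
import json

# Allowed codes per dimension, inferred from the values seen on this page
# (assumption: the real codebook may define additional categories).
DIMENSIONS = {
    "responsibility": {"ai_itself", "government", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    index the codings by comment ID, skipping rows with unknown values."""
    indexed = {}
    for row in json.loads(raw_response):
        # Keep the row only if every dimension holds a recognized code.
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            indexed[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return indexed

# Example with one record from the response above.
raw = '''[
  {"id": "ytc_UgxjHKweqa7s6ZC0JHB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''
codings = index_codings(raw)
print(codings["ytc_UgxjHKweqa7s6ZC0JHB4AaABAg"]["policy"])  # → ban
```

Validating against the allowed-value sets before indexing guards against malformed LLM output, so a bad row is dropped rather than silently stored.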