Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "As far as I understand, superintelligent AI is just like a nuclear weapon potent…" (ytc_UgzV7brUe…)
- "the biggest idiocracy in AI science is assuming that a system of certain capacit…" (ytc_UgypXPpyk…)
- "As long as there's a physical lever to turn the power off, and robots don't begi…" (ytc_Ugx4F08_M…)
- "Anybody seen I-ROBOT. THE MOVIE.. MAN AND HIS EVIL SPIRIT WILL USE FOR EVILNESS…" (ytc_Ugxb0_HZL…)
- "That makes me feel so sad, that people have been told so much that they CAN'T be…" (ytc_UgxfAQFVe…)
- "There's a lot wrong with this logic you'll be replaced not by Ai but someone usi…" (ytc_Ugw_rQfpX…)
- "Let me pause you right there.... Who in the FUCK is getting a haircut from an AI…" (ytc_Ugzd8Pcfu…)
- "I'm mixed on AI tbh. Rather than think AI itself is bad, I think it's the misuse…" (ytc_Ugyy6WGq2…)
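The "Look up by comment ID" control above amounts to an index over the coded records. A minimal sketch of that lookup, assuming the coded output is stored as a flat JSON array in the same shape as the Raw LLM Response shown further down; the file name `coded_comments.json` is hypothetical, not the pipeline's documented layout:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index a flat JSON array of coded records by their "id" field."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Hypothetical file name; the tool's actual storage layout is not shown on this page.
coded = load_coded_comments("coded_comments.json")
print(coded.get("ytc_Ugx533xVo-hSoW3STyF4AaABAg"))
```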
Comment
I took the idea of superintelligence relatively seriously when I was a teenager, and then, in college, I studied cognitive science and learned how human biology solves some of the problems inherent in interacting with the world, and afterward the fear of superintelligence just kind of seemed like a joke. If you look at the online spaces that the crowd Yudkowsky and Soares are a part of originated in, you'll notice that a lot of their reasoning comes from probability theory; if you compare a perfect epistemic agent's probabilistic reasoning process to a human's, it becomes apparent that humans are very bad at learning, so an AI who's actually epistemically coherent would be miles and miles smarter, right? But if you look at how even a perfect epistemic agent would have to interact with the real world, both to gather information and achieve its goals, you'll see that a lot of things are actually just intrinsically impossible to do or figure out in a way that's efficient enough to make even the best superintelligence represent the sort of threat the AI panic crowd think they would be. There are limits to optimization; humans are pretty far from those limits, but computer-assisted humans are closer, and even something that reached those limits wouldn't be a god.
youtube · AI Moral Status · 2025-10-30T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
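The dimension values above come from a closed codebook. A minimal sketch of that schema as Python enums is below; the value sets are inferred only from the labels visible in this sample and in the raw response beneath, so any values the codebook defines beyond these are not captured here.

```python
from enum import Enum

class Responsibility(str, Enum):
    NONE = "none"
    AI_ITSELF = "ai_itself"
    DISTRIBUTED = "distributed"
    COMPANY = "company"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    MIXED = "mixed"

class Policy(str, Enum):
    NONE = "none"
    LIABILITY = "liability"
    REGULATE = "regulate"

class Emotion(str, Enum):
    INDIFFERENCE = "indifference"
    RESIGNATION = "resignation"
    FEAR = "fear"
    OUTRAGE = "outrage"
    APPROVAL = "approval"
```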
Raw LLM Response
```json
[
{"id":"ytc_Ugx533xVo-hSoW3STyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznUdxzETHRyzE4L8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxvpx4B5WAI1AG8d2F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzlZs1Bk1mY4KiAxKx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy0jut33-HQcZnXaWJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7SCNpTM5aM7M6FdF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLHDzE6jDrpPtKtnN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJOTqlMJWZtjCj7894AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFKq2YDOqwxlaeXqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgykcxymMbgSrsjYauR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
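Since the model is asked to return a bare JSON array, a defensive parse step is worth sketching. The check below reuses the value sets inferred above and keeps only records whose values fall inside the observed codebook; the function name and error handling are illustrative, not the project's actual code.

```python
import json

# Value sets observed in this sample; the full codebook may define more.
EXPECTED = {
    "responsibility": {"none", "ai_itself", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "outrage", "approval"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw model reply, keeping only records with known codebook values."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in EXPECTED.items()):
            valid.append(rec)
    return valid
```

Dropping rather than repairing out-of-vocabulary records keeps the coding table trustworthy; a production pipeline would more likely log such records for re-coding instead of discarding them silently.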