Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I took the idea of superintelligence relatively seriously when I was a teenager, and then, in college, I studied cognitive science and learned how human biology solves some of the problems inherent in interacting with the world, and afterward the fear of superintelligence just kind of seemed like a joke. If you look at the online spaces that the crowd Yudkowsky and Soares are a part of originated in, you'll notice that a lot of their reasoning comes from probability theory; if you compare a perfect epistemic agent's probabilistic reasoning process to a human's, it becomes apparent that humans are very bad at learning, so an AI that's actually epistemically coherent would be miles and miles smarter, right? But if you look at how even a perfect epistemic agent would have to interact with the real world, both to gather information and to achieve its goals, you'll see that a lot of things are actually just intrinsically impossible to do or figure out in a way that's efficient enough to make even the best superintelligence represent the sort of threat the AI panic crowd thinks it would be. There are limits to optimization; humans are pretty far from those limits, but computer-assisted humans are closer, and even something that reached those limits wouldn't be a god.
youtube · AI Moral Status · 2025-10-30T23:3…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | none                       |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[ {"id":"ytc_Ugx533xVo-hSoW3STyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgznUdxzETHRyzE4L8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxvpx4B5WAI1AG8d2F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzlZs1Bk1mY4KiAxKx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy0jut33-HQcZnXaWJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw7SCNpTM5aM7M6FdF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzLHDzE6jDrpPtKtnN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyJOTqlMJWZtjCj7894AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzFKq2YDOqwxlaeXqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgykcxymMbgSrsjYauR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"} ]