Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think we'll achieve SAI anytime soon. Development is accelerating rapidly, and society is very complacent, but we'll hit a wall in electronic computing. Currently, we're at a "jet engine on a paper airplane" stage: it flies very fast, but it won't break the sound barrier like a real jet plane. To achieve SAI, we'll need to master biological computing or quantum computing, but quantum computing isn't very promising because it won't be very efficient even after full development. A quantum-based SAI could end up being as disappointing as Deep Thought from The Hitchhiker's Guide, and a biology-based AGI would end up like The Matrix, but with artificial brains. In any case, there's no point in going that far, and the current "paper airplane" won't sustain the hype long enough to get there. The numbers haven't added up for a long time, and the bubble won't sustain itself until we master biocomputing from scratch. Any tool can be used for evil if the user so desires, but regardless of intent, the misuse of AI will ultimately create a huge gap in education and in the young, inexperienced workforce. When the bubble bursts, we will be at the mercy of uninformed, complacent, or negligent congressmen, or desperate, greedy, complacent, or negligent CEOs. If the "Minerva vote" ends up cutting off access for the masses abruptly instead of gradually, we will end up in a huge crisis trying to fill that gap, and people will undoubtedly suffer. That's why we must pause AI as quickly as possible.
youtube AI Moral Status 2025-11-04T03:3… ♥ 3
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzdD362N-69jb_GqO54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwFKzdZ6IS3bSjeDGB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwK8vNHvAAC4qgyPZB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyi9ZyCrLQY6-3cWCF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzHDlDtpu7Dv0PEtkx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz8TKA8OgiK9y0qax14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyMJI7gRBEnkFgn6JB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwcNk_cuVklAe_4VVp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGrgrKNaUKIJiZ74l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwUMsFWYfQOUsLfRIB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
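To trace a coding result back to the raw model output, the JSON array above can be parsed and indexed by comment id. The sketch below is a minimal example of that lookup, assuming the batch schema shown (fields `id`, `responsibility`, `reasoning`, `policy`, `emotion`); the variable and function names are illustrative, not part of the tool.

```python
import json

# Raw LLM response: a JSON array with one coding object per comment.
# Only one entry is reproduced here; the real output contains the full batch.
raw_response = """[
  {"id": "ytc_UgwFKzdZ6IS3bSjeDGB4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "resignation"}
]"""

# Index the batch by comment id for O(1) lookup of any coded comment.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment shown on this page.
result = codes_by_id["ytc_UgwFKzdZ6IS3bSjeDGB4AaABAg"]
print(result["reasoning"], result["emotion"])
```

A lookup like this makes it easy to verify that the dimensions displayed in the Coding Result table (e.g. reasoning = consequentialist, emotion = resignation) match the model's raw output for the same comment id.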