Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How do you ban "Super Intelligence?" You can ban nukes and chemical weapons - you can pretty easily draw a box around devices and processes that can be used for those ends. But how do you know when something is super intelligent? We can't even really say when something is regular intelligent. I feel like by the time you're able to label something as super intelligent, it's probably too late. But I feel like it's already too late. The arms race has already begun and you can't really put that back in the bottle. If the US had stopped researching nuclear weapons after WW2, there's no reason to think that the USSR would have. Why would they? Why would they trust that the US really did stop development? Similarly, if we try to heavily regulate AI growth, why would China similarly constrain themselves? And if this path of research does indeed lead to Super Intelligence, why would it be better for them to achieve it than the US? I mean, what would be really great is if the two of our countries could work together to try to make some sort of global AI alignment framework and actually adhere to those standards to safely navigate the future. But I really don't see that happening, so I just hope the Super Intelligence isn't too mean. I'm cool being a human battery, just make sure the simulation is good.
youtube · AI Governance · 2025-08-26T23:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugzh9czA2QvX-SlBTKJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwGkoT9VHBrbj4IsUp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwm1Kak7gLJ7gFthNR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwY30qtHfLXBvXQQjN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy2EC6imxIwmDTmUB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]