Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "there will be true ai artists once people start remixing and putting more of the…" (ytc_UgxjMC5Kt…)
- "You need to take in consideration the massive automation going on how can we mak…" (ytc_UgwMQx0SE…)
- "As someone pursuing a computer science degree, this question is easy to answer: …" (ytc_UgzYpebqH…)
- "The correct way to use AI is as a teacher. You should struggle with a problem fo…" (ytc_Ugxs-hGhq…)
- "Lied so much all of men that are on pornhub first we know never to click on ads …" (ytc_UgzYnM0F_…)
- "Aww AI is a mirror. It's telling us our own fears. my freakin heart. It's us (h…" (ytc_Ugx32bq19…)
- "Hey all, to anyone trying to find non-AI imagery on google, add "before: 2022" w…" (ytc_UgzKQ5Ibs…)
- "I'm a programmer. I've used ChatGPT for a handful of things; it's good at writin…" (ytc_UgxR7lkR3…)
Comment
How do you ban "Super Intelligence?" You can ban nukes and chemical weapons - you can pretty easily draw a box around devices and processes that can be used for those ends. But how do you know when something is super intelligent? We can't even really say when something is regular intelligent. I feel like by the time you're able to label something as super intelligent, it's probably too late.
But I feel like it's already too late. The arms race has already begun and you can't really put that back in the bottle. If the US had stopped researching nuclear weapons after WW2, there's no reason to think that the USSR would have. Why would they? Why would they trust that the US really did stop development? Similarly, if we try to heavily regulate AI growth, why would China similarly constrain themselves? And if this path of research does indeed lead to Super Intelligence, why would it be better for them to achieve it than the US?
I mean, what would be really great is if the two of our countries could work together to try to make some sort of global AI alignment framework and actually adhere to those standards to safely navigate the future. But I really don't see that happening, so I just hope the Super Intelligence isn't too mean. I'm cool being a human battery, just make sure the simulation is good.
Source: youtube · Topic: AI Governance · Posted: 2025-08-26T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzh9czA2QvX-SlBTKJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwGkoT9VHBrbj4IsUp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwm1Kak7gLJ7gFthNR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwY30qtHfLXBvXQQjN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2EC6imxIwmDTmUB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
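The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing and validating such a batch, in Python, assuming the four dimensions shown in the coding table and only the category values observed in this page (the real codebook may define more):

```python
import json

# Allowed values per coding dimension, inferred from the samples shown
# above; a hypothetical subset of the actual codebook.
ALLOWED = {
    "responsibility": {"none", "unclear", "ai_itself"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "ban"},
    "emotion": {"outrage", "approval", "fear", "indifference"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response into {comment_id: codes}, checking each field."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded
```

Keying the result by comment ID matches the "look up by comment ID" workflow above; a `ValueError` on an out-of-vocabulary value is one simple way to catch model outputs that drift from the codebook.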