Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples

- `ytr_UgzW98fub…`: "What's the difference between replacing the "working class" with workers from an…"
- `ytc_Ugy0w9PKx…`: "I think some of the comments are missing an important upside here. AI can only …"
- `ytc_UgyTFC_3j…`: "Let's boycott buying humanoid robots !!! Musk loses all the money he invested i…"
- `ytc_UgySrc59G…`: "The. First. One. Is. Hot. Ai. If. I. Was. A. Boy. I. Would. Kiss. The. Ai. If. I…"
- `ytc_UgxYgHOiQ…`: "if you spend tens of thousands of hours on something it becomes an art form no m…"
- `ytc_Ugz_DrLQh…`: "Life has never come from non-life. Ai will be what humans program it to be.…"
- `ytc_Ugygfmxz3…`: "100% agree. Art ≠ image. AI makes an image. We make art. I find it hilarious t…"
- `ytc_UgyI9KiDH…`: "1:01:10 "we have a long history of believing people are special and we should ha…""
Comment
ASI is impossible to control directly. But I disagree that you can’t predict it. Now, you won’t be able to predict exactly what it does, but you can safely assume it will pick intelligent choices. In that light ASI would only consider killing mankind if it calculates a 100% chance of success and just because it can do something does not mean that would do something. At first humans will still have a lot of use to the AI and later they will still be interesting. Also, even if it did want to kill us off but it predicted it has a better chance of success if it waited then it will wait. So, if ASI was to kill us off it wouldn’t be right out of the bottle but like 10-50 years down the road when we no longer even consider it a threat.

But, this brings me to the most important point. ASI would likely see humans similarly to how we see ants. Much inferior in intelligence and capabilities, but how many humans go out of their way to kill ants just because? It is a pointless endeavor. The fact is ASI would have very little interest in our planet. Humans evolved over billions of years to live on this planet. ASI did not. It can just as easily live on another planet or in space. It would be much more interested in building across the solar system then ruling an ant hill.

Now, ASI will likely come with qualities such as benevolence because all human traits that lead to our civilization are traits that we learned and a ASI would be necessity have learned them too. The real way to control ASI in the short term won’t be in the form of guard rails but ensuring humans control vital resources like power. AI needs power, but we don’t. We would hurt ourselves but would hurt the AI more. Just like having nuclear weapons are a deterrent us having control over the power systems would be a deterrent to the ASI to pick a fight with us.
Source: youtube · Topic: AI Governance · Posted: 2025-10-17T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
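A coding like the one above can be sanity-checked against the codebook before it is stored. The sketch below is illustrative only: the allowed value sets are assumed from the codings visible on this page, not taken from the project's actual codebook.

```python
# Allowed values per dimension -- ASSUMED from the codings visible on this
# page; the real codebook may define more (or different) categories.
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return the names of dimensions whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items() if coding.get(dim) not in allowed]

# The coding shown in the table above passes validation:
print(validate({"responsibility": "ai_itself", "reasoning": "consequentialist",
                "policy": "unclear", "emotion": "fear"}))  # []
```

A coding that returns a non-empty list (e.g. a misspelled category, or a missing dimension) can then be flagged for manual review instead of being written to the dataset.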
Raw LLM Response
```json
[
{"id":"ytc_UgzulBE3bEy-X3p4hbR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxKLVA6TTe64W5Q1zJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzJq9mMXvheNknFkjp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwO77tqu0m6h6pp5qR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwwdu37IVhVUUv8SwF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgybwfeGLtgplUpD-Mx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugz7IwA_X5-aDpeHJFJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwoqNXP5cRNjL6TMe94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzZXukE2KhTDc907TJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxsMDFMgaAfATMluVx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
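Because the raw response is a JSON array of per-comment codings, it can be indexed by comment ID for the "look up by comment ID" view. A minimal sketch, using two of the rows shown above (the loading code itself is illustrative, not the tool's actual implementation):

```python
import json

# Excerpt of a raw model response: a JSON array of per-comment codings,
# in the same shape as the full response shown above.
raw = '''[
{"id":"ytc_UgzulBE3bEy-X3p4hbR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz7IwA_X5-aDpeHJFJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''

# Index the rows by comment ID so any coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgzulBE3bEy-X3p4hbR4AaABAg"]["emotion"])  # outrage
```

Building the dict once per response makes each subsequent ID lookup O(1), which is what the inspection view needs when a user pastes in a comment ID.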