Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Elon musk says that ai is more dangerous than nukes but is still going to make r… (ytc_Ugw--8kHQ…)
- The Turing Test was supposed to be this massive hurdle, right? Some even saw it … (ytc_Ugx-Dv6_J…)
- I think you are wrong in almost everything. Uber-Driver isnt a new job, they are… (ytc_UgzxYsE44…)
- It seems like everyone else talking about and using AI is using different system… (ytc_UgztJGt2C…)
- Bro Sora is so fucking trash. The videos generated by it do look good on the sur… (ytc_UgyloB7gr…)
- AI is designed to tell the user what they WANT to hear. Every AI statement is d… (ytc_UgxpHkG05…)
- I actually agree that AI art does have soul. Although it is a fragmented franken… (ytc_UgyfNDGbK…)
- "AI's role in education extends beyond the classroom, as it can facilitate lifel… (ytc_UgzXGSDLy…)
Comment
I love Stephen Fry, but a lot of these predictions are beyond speculative, especially where they meet the physical world.
Just one example: if GPT-6 becomes dangerous, it's always possible to cut power to the datacenters, or even bomb them. Rogue AIs 3D-printing armies of killer drones? Again, destroy the plant.
This video distracts from the real dangers, which I think are threefold.
First, there is likely to be mass unemployment. I've been predicting this for more than 25 years. As our systems increasingly become scalable, fewer and fewer people are necessary to run them.
Second, military technology will become even more dangerous. I think that the war in Ukraine is the breeding ground for this. Once there are autonomous killer drones in the world it's a simple step for them to be used by groups like the Iranian Revolutionary Guard to carry out attacks.
Third, similar groups will try to use open source models to create bio-weapons. A friend who is a PhD CEO in pharmaceuticals says that it's much harder to create and deploy such weapons than we think, but I still expect it to happen.
youtube
AI Moral Status
2025-07-12T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzzmk3qv60b9rhhJDR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8cJFu53aVrXzeO8p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy6rhk3pkhsiCFA3CB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwFXRKKE8gQ2u_kvWV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzKPEMgBhpFKK5xmgx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEKvuU7r0hTku-mQN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVicA4rp3Yf49154N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxoDt1txHnUatRQccF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyBbam9hcw7YIqG_3t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyl7DvKAaXllbxYeUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
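A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming the allowed code values are exactly the ones visible in this sample output (the real codebook may define additional values, and the function name `validate_codings` is illustrative, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may permit more values than these.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not row.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgxoDt1txHnUatRQccF4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"mixed"}]')
print(len(validate_codings(raw)))  # 1
```

Filtering rather than raising keeps a single malformed row from discarding an otherwise usable batch; rejected rows can be re-queued for coding.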