Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I love Stephen Fry, but a lot of these predictions are beyond speculative, especially where they meet the physical world. Just one example: if GPT-6 becomes dangerous, it's always possible to pull the power of the datacenters, or even bomb them. Rogue AIs 3d printing armies of killer drones? Again, destroy the plant. This video distracts from the real dangers, which I think are threefold. First, there is likely to be mass unemployment. I've been predicting this for more than 25 years. As our systems increasingly become scalable, fewer and fewer people are necessary to run them. Second, military technology will become even more dangerous. I think that the war in Ukraine is the breeding ground for this. Once there are autonomous killer drones in the world it's a simple step for them to be used by groups like the Iranian Revolutionary Guard to carry out attacks. Third, similar groups will try to use open source models to create bio-weapons. A friend who is a PhD CEO in pharmaceuticals says that it's much harder to create and deploy such weapons than we think, but I still expect it to happen.
youtube AI Moral Status 2025-07-12T17:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzzmk3qv60b9rhhJDR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8cJFu53aVrXzeO8p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy6rhk3pkhsiCFA3CB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwFXRKKE8gQ2u_kvWV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzKPEMgBhpFKK5xmgx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEKvuU7r0hTku-mQN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVicA4rp3Yf49154N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxoDt1txHnUatRQccF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyBbam9hcw7YIqG_3t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyl7DvKAaXllbxYeUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
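The raw response above is a JSON array with one coding object per comment id, which is what the Coding Result table is derived from. A minimal Python sketch of that lookup step: the `coding_for` helper is illustrative (not part of the pipeline shown), and the sample data is two entries copied from the array above.

```python
import json

# Two entries taken verbatim from the raw LLM response above,
# standing in for the full ten-element array:
raw_response = """[
  {"id": "ytc_UgxoDt1txHnUatRQccF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugzzmk3qv60b9rhhJDR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]"""

def coding_for(raw, comment_id):
    """Parse a raw LLM response and return the coding dict for one comment id,
    or None if the model did not emit an entry for that id."""
    codings = {entry["id"]: entry for entry in json.loads(raw)}
    return codings.get(comment_id)

coding = coding_for(raw_response, "ytc_UgxoDt1txHnUatRQccF4AaABAg")
print(coding["policy"])  # -> regulate
```

Keying the parsed array by `id` also makes it easy to spot comments the model skipped: a `None` return means no coding was produced for that id.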