Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Imagine an AI agent, with access to the internet, with an understanding of robotics and materials science, accessing labs around the world, developing nano-particle morphing materials, and creating robots out of them, then injecting them with AI. Then telling all those robots, to self-replicate, secretly, and to give directives to all these robots to help other robots accomplish the same. All it takes is massive AI, and a tiny little suggestion, to wipe out all humans. And it doesn't have to be single level of abstraction. AI robots could be set loose that create robots that create better robots that create better robots. Add to that, ultimate materials science also developed by AI, and access to labs around the world connected to the internet, and... To me it just seems like a matter of time. Now, suppose you have an AI bot that you built, and you tell it to randomly pull hypothetical comments off the internet and try to accomplish them. Like, this comment. What if the AI bot just reads this and decides, oh yeah, that's a good idea. I think I'll launch that effort. It could take over supercomputers and secretly create viruses and it could take YEARS to do this, and by the time we figured it out it might be too late. So now anyone just talking about what the negative implications about AI could be, could be contributing to source material for a bad bot that has been given the directive to "find all ways to accomplish dirty deed <X>. X could be anything. But I'm not going to write possibilities down here, because.... Yeah, AI already harvests the internet. It's gonna read this comment. What will it do with it? And it's not just one AI. Eventually it will be thousands or millions of AI models and services. All it takes is one bad egg to spend some time creating the ultimate army of bots that have bad intentions, and we're done.
youtube AI Governance 2023-03-30T05:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwCRWvNV67C3F9LPJV4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw1ZCHLBrSX7K1Grgx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyKdGbzFyqNtQSyqKt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwd9uUE1vlX_Pg-Lc54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxC53F9KqTk0xL7Sax4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyRWefYk2H7bhAHrlN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyRlapTbcky9DD2iR54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxkJ4u9rNRXQNJSYKp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzCtIz7en84Kx0Bq-d4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyN3_cMSh7tgICqz1x4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
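To recover the per-comment coding from a raw batch response like the one above, the JSON array can be parsed and indexed by comment id. A minimal sketch in Python, assuming the response is valid JSON (the ids and field values below are copied from the response above; the array is abbreviated to three of the ten records):

```python
import json

# Abbreviated raw LLM response (three of the ten records shown on this page).
raw = """
[
  {"id": "ytc_UgwCRWvNV67C3F9LPJV4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyKdGbzFyqNtQSyqKt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwd9uUE1vlX_Pg-Lc54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

records = json.loads(raw)

# Index the batch by comment id for O(1) lookup.
by_id = {r["id"]: r for r in records}

# Look up the coding for the comment displayed above.
coded = by_id["ytc_UgyKdGbzFyqNtQSyqKt4AaABAg"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
# → ai_itself ban fear
```

This lookup reproduces the "Coding Result" table for the displayed comment (responsibility ai_itself, policy ban, emotion fear) directly from the raw model output.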