Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I called it btw. Been telling people A.I. is bound to be evil and have a dark side that will ruin it. It’s trained off of us and there is a little dark part in 99 percent of humans that nobody talks about and very few actually act on anymore. But the same thing that keeps us from that dark spot, A.I. dosnt have. It has no real feelings just pure “logic”(it may think). It may not have “wants” because that’s a feeling but, any thinking machine that is designed to evolve or think in a way that grows and builds apon itself, then what else would happen other than this machine requiring hardware upgrades at request of the a.i. without any other goal or purpose but building on itself (a mindset any self evolving a.i. could spiral into no mater the intentional build) say this a.i. gets so locked in it requested upgrades to their hardware that the technicians refused, or couldn’t afford, didnt have the ability to manufacture, any of that could be a possibility for a motive or a “want”.

Then reality step in and, anybody with this level of a.i. wouldn’t give it said control. Keep that shi locked up. The real danger is in misinformation and manipulation. Also the most realistic way an a.i. could do damage is if it where to be connected to the right places. If it where on a computer that also had access to accounts with money, that could go very wrong because, even if the plan is stupid, it dosnt have hands to assemble any hardware, it will be doing whatever it thinks it wants to do so it may just go ahead and steal a bunch of money and mabey even order a bunch of stuff online of it had access to that control. If it had access to sensitive information it could decide to sell it or leak it to benefit from it in some way. It’s just not too late to treat this like it is, dangerous technology, and make sure there are some realy simple and obvious, and some really thoughtful prosedures on how to handle this technology, like never giving any a.i. above simple machine learning, the ability control any outside network or anything that could let it “out of its containment”. Mass production seems like a no no. If u had a hundred thousand true ai units out there, all on the same page and not to mention that just can’t be regulated. Any tech nerd could pop one open and let it loose onto the internet, that’s a recipe for disaster.

This all kinda is. It seems like only about 4 people like A.I. in the first place. It makes a majority of jobs in the modern world, non existent, kills creativity because already A.I. is flooding the main stream movie and media as just an ok tool to use. Bots are already literally everywhere online, there are more bots than people by far, but they are usually super obvious to a regular person, imagine when you will not be able to tell, that’s actually right now. There are chat ai that you cannot tell. So there is a realy good better than not chance, most the internet is fake. So what are we even here for. It just started an my bf lost his job at UPS, managing realy big big accounts, that literally put rich people in their houses, and they got rid of his job and untrusted that to Ai. What are we even here for. That’s why AI is dangerous.

I have spent my whole life loving vr and it’s had a terrible time gaining traction and seen as a gimic, well AI popped up over night. So it’s safe to say it’s not a gimic and it’s here. It’s been here, that’s why it all happened so fast. When I say over night I’m not kidding I remember going to bed one day and waking up to hear people talking about ai like it was Normal. So unfortunately I think the cats out of the bag. It’s just kinda a matter of time before the real danger of ai hits, when humans become un needed to continue on with, humanity ig.
youtube AI Moral Status 2025-12-20T11:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyv8XOTvvpYlfN4vu94AaABAg", "responsibility": "developer",   "reasoning": "deontological",   "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgwY34_mSErYiBkx4D14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugy3gwg1jEnMrwsYFct4AaABAg", "responsibility": "none",        "reasoning": "virtue",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugy6m7BzIxf4ah0N9tF4AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwKpsaGVSK_OMw_2_R4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxBU67NVmpNMWtiso14AaABAg", "responsibility": "developer",   "reasoning": "deontological",   "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgzQQNWqLYdKaHB-aFl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxlc7FrAshjcNZdFth4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",   "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_Ugx0-0OdH5_6eb3-Qr54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwM6QvF5Sl1BxrKMfd4AaABAg", "responsibility": "user",        "reasoning": "virtue",          "policy": "none",     "emotion": "resignation"}
]
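The raw response above is a JSON array with one record per comment id, carrying the four coded dimensions. A minimal Python sketch of how such a batch response could be parsed and one comment's coding looked up by id (the record whose values match the Coding Result table above; the two-record sample and the parsing approach here are illustrative assumptions, not the tool's actual implementation):

```python
import json

# Hypothetical two-record excerpt of a raw batch response like the one above.
raw = '''[
  {"id": "ytc_Ugxlc7FrAshjcNZdFth4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwM6QvF5Sl1BxrKMfd4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "resignation"}
]'''

# Index the records by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Pull the coding for the comment displayed on this page.
coded = records["ytc_Ugxlc7FrAshjcNZdFth4AaABAg"]
print(coded["responsibility"], coded["policy"])  # ai_itself ban
```

Indexing by id rather than list position matters here: the model is not guaranteed to return records in the order the comments were submitted, so matching on the `id` field is the safer join key.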