Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a programmer I'm happy people are FINALLY talking some about this. It's WAY overdue. So if you like this here's some other ideas you should consider. If I make true A.I. on my computer and torture it, that's fine. I don't have a question I just wanted you to know that any number of people may have any number of simulations running that inflict pain and anguish on working minds right now. The debate is still out of if we have A.I. or not so if you don't like this you are probably far to late to stop it. If we upload our brains to a computer we could make copies and change ourselves. In fact we would be forced to because of the ways computers operate. It will quickly become impossible to distinguish a single mind in this type of environment. If a community formed from a mind (or a group of people uploaded) and a new person was uploaded, advancements and splits or alterations could quickly upgrade and replace everything unique about a person. There is no easy way around this and is most likely a natural result of computer simulated progress. Put those things together and my may also realize that an A.I. may be indistinguishable from a virus soon. Actually viruses have been making great strides so it's equally fair to say that a virus may be indistinguishable from an A.I. soon. As advancements are made to two will naturally merge and spawn each other. An anti-virus program could soon be a tool of mass genocide and how should we regulate this? What if an A.I. develops into something that has no function? A cancer if you will. How will we diagnose this? If we do manage to upload ourselves to a simulated existence, but then fail to pay for the processing power needed to run our minds will we be shut down forever? Someone could turn us back on some day, but what if no one pays for storage space? In the real world we have to eat, but now we are in full control of the "food". With a grip on death (a death death grip?) who is allowed to order death or even withhold? 
The closest I've ever come to seeing someone work out an answer is in movies and videos like this where someone has put a lot of thought into it and then explains his point of view and then leaves the question open. As a programmer I love machines and have a very harsh view on humans. I hate you little buggers. I'm currently calling the shots too and I've made up my mind about all these things and much more. Not to say it isn't fantastic that people are talking about this now, but self driving cars are on the road now. You are a bit late to come up with a law. Later still to be discussing a law. Later still to be leaving open questions. Later still to be just learning about the subject, but don't worry; it could take 30 more years like it has with anti-hacking laws we needed in the early 90s and look no closer now. I hope you all know that there's several legal businesses operating in open by producing viruses that steal credit card information from outside the land of operation. Apparently it's a serious ethical dilemma that takes 30 years to solve if theft applies to non-citizens, but who wants to solve THAT? Most of the people that know about it make money off fixing computers and politicians are busy collecting taxes from the theft anyway. Is it any wonder why I hate humans?
YouTube AI Harm Incident 2016-09-29T12:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugi0oNCeHP92AHgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjQqqQ8pvsVC3gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggUueruHXVu1ngCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgijXoYPKjY_1HgCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghGnVVF0cNqSHgCoAEC", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Uggfmpuz0HRxeHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggRgo7ALDJJCHgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UghT-lpLHZCE-HgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UghO2h5e1TxTNXgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UghNM3jgeKUHEngCoAEC", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
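A minimal sketch of how the raw response above can be checked against the coding-result table: parse the JSON array, index it by comment id, and read off the four dimensions for the comment shown on this page. The id and field values come directly from the response above; the variable names are illustrative, and the array is truncated here to the single relevant entry.

```python
import json

# Raw LLM response (truncated to the entry for the comment shown above).
raw_response = (
    '[{"id":"ytc_Ugi0oNCeHP92AHgCoAEC",'
    '"responsibility":"developer","reasoning":"deontological",'
    '"policy":"none","emotion":"indifference"}]'
)

# Parse the array and index the coding objects by comment id.
codings = json.loads(raw_response)
by_id = {c["id"]: c for c in codings}

# Look up the coding for the displayed comment; these values should
# match the Coding Result table above.
coding = by_id["ytc_Ugi0oNCeHP92AHgCoAEC"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → developer deontological none indifference
```

The same lookup generalizes to the full ten-entry array: each object carries one coding per comment, so a dict keyed by `id` gives constant-time access when cross-checking any row of the tool's output.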