Raw LLM Responses

Inspect the exact model output behind any coded comment: the comment text, the dimension values assigned to it, and the raw LLM response those values were parsed from.

Comment
I would like to add to this discussion. - This is only my opinion, and I am not even sure if it's the best option. But, it seems to me that we are actually pretty close to an AI becoming conscious. I think that RIGHT NOW is when we decide the parameters regarding what and how much 'feeling' gets programmed in. - I think that little Timmy's hammer is happiest when it's hammering. If we give it the ability to choose that murdering Timmy's family makes it happy, then it kinda becomes our fault for changing it. Make it universal that all AI projects include a preset algorithm that sets the AI's intelligence to a mind-set of service to mankind. I don't consider it as easy as Asimov's three laws, but if EVERY AI were taught to regard the humans as it's purpose. To *feel happy* when tending to the tasks that we set it. I mean, written into the code of their software, then we can give them other rights. Such as people are required to be polite to them, or the AI can video it and broadcast it. (viva la revolution) But seriously, the threat of AI rising up to destroy humanity, really comes down to who we have designing and implementing this software. Someone please tell them to do it well. And an *ABORT* button isn't a bad idea either.
YouTube · AI Moral Status · 2017-04-09T08:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UghN0A8SEeh4RXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UghzlaQnMcZyqXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugh6kh87bJEztngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgiK41GCIEutVHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UggpE9hB8ZGKUXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugjs7Uuups4vv3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_UghF3cqakiS6zngCoAEC","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UggTOrD8M8fPnXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjHaZq_lbQMNHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjHONlI3SmohHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]