Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why add codes, develop tech to enable them to feel pain, death by unplugging, etc....when its not in their nature to begin with, solving a problem can be solved by not creating it from the first place. The idea i wrote above could work only in terms of making the AI understand human nature, and not become evil, by experimenting themselves what human nature engineering is and learn from it to not do something bad just because they might considering wiping out humanity as efficiency vs ethics, but then again, if an AI reaches human understanding it might also shut down those codes to be able to also benefit from something they would not , like these machine rights while in the same time having the so called immunity to human pain or just decide to do in order to take control. I personally think the potential behind AI is as unknown as the space-time itself, and we might get to play with things that will destroy us, going the other way as to enhance our biological structure with robotics to stretch the limits of the human body is by far the most efficient and secure path to controllable evolution for our species.
youtube AI Moral Status 2017-02-23T16:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgiveMjZemHGGHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghOnqpItWsoN3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghxIkKCF0da9ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggQC_X6GCXb-XgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgiZTomR8t9t8XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghPnX8p8kXgNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjkdfxV0TC693gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgimBcFcL1grSHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgjwqWnr_kYH83gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UghMs1kjBq3vf3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
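A raw response like the one above can be checked before it is stored as coding results. The sketch below is a minimal, assumed-schema validator: the allowed value sets are inferred only from the values visible on this page, not from the project's full codebook, and the function name `validate_codings` is illustrative.

```python
import json

# Assumed coding schema: value sets inferred from this page only,
# not from a complete codebook.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"mixed", "approval", "indifference", "outrage", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coding rows) and
    reject any row whose dimension value is outside the allowed set."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Example with the first row from the response above:
raw = ('[{"id":"ytc_UgiveMjZemHGGHgCoAEC","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
rows = validate_codings(raw)
print(rows[0]["emotion"])  # mixed
```

Validating at ingest keeps malformed or hallucinated category labels out of the result tables, so a row such as the "Coding Result" above always maps onto known dimension values.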