Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This whole argument stems from and banks upon the notion of "sentient AI", which is easier said than understood, let alone done. We are not giving enough credit to the complex nature of what constitutes "mind" or "consciousness". And since we can't understand it, I doubt if we can create it. The link http://www.rawstory.com/2016/03/a-neuroscientist-explains-why-artificially-intelligent-robots-will-never-have-consciousness-like-humans/ is a nice take on this matter, and tells you why a "sentient AI" is highly improbable. Even in its most advanced sense, they will merely simulate brain, but not actually be one. And we are yet to properly understand the brain, then how can we even begin to understand, let alone create consciousness? Like I mentioned earlier, we should keep it simple, since we have the power to dictate the terms. You're right when you say "right" is something we created for ourselves, so why don't we keep it to ourselves, and not muddy the water by bringing in these pesky robots? For the sake of it, I'd say, if they don't have a central nervous system, don't grant them rights. There, yet another reason. :P
youtube AI Moral Status 2017-03-10T18:3…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgjLYJhHPMsUEHgCoAEC.8PtuUTIQEvX8PwSVSJcIhI", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgiVwjkV9NIQk3gCoAEC.8PrjymS4JXo8PsaiuqXSnb", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgiVwjkV9NIQk3gCoAEC.8PrjymS4JXo8PtiwpvP6H8", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugi318Nm44FAC3gCoAEC.8PqhuLdkTCr8QUrkX658J7", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UghZs-vx_DY4WngCoAEC.8Ppcp7Bn9RL8Pq9acEMmtW", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UghZs-vx_DY4WngCoAEC.8Ppcp7Bn9RL8PqvLGRI204", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_Ugh4-V51At2SPngCoAEC.8PouRlAd_Sn8Q_7dKAoD2q", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UggczUPfJ9n2pw3gCoAEC.8PnRiYml4Pl8Q03__JyheV", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytr_Uggczad5RakHtngCoAEC.8P_l9quOfj68PacjLfPG-g", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgirsstFTRgqcHgCoAEC.8PWp2MxMP0z8PWqqddcvWJ", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
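A raw response like the one above can be parsed and checked against the coding schema before the labels are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the sample response shown here, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may include more categories).
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it has an "id" and every dimension carries
    one of the allowed values from SCHEMA.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed entry: skip rather than store bad labels
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Running this over the full response above would keep all ten records, since every value appears in the inferred schema; a record with a misspelled or novel label would be dropped for manual review.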