Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not opposed to sufficiently advanced AIs from having rights but I'm going to point out that there are loopholes to continue the robot slavery forever by simply gimping their processing power so it never reaches that level where we would feel sorry for them. For example, an excavating machine would have no need to match human intelligence to extract ore just like a calculator doesn't. The processing power and thus the software needed to grant it self awareness or whatever ability we would consider to be worthy of having rights would never be integrated in such a machine, despite how more advanced and cheaper those components would be as time goes on. In other words, we might have a growing population of human or beyond human level intelligence AIs while in parallel have billion, trillions etc. of just bellow human level intelligence AIs exploited for work and with no rights. While we could justify stunting their intelligence development by saying they don't need it, will their more advanced versions agree and if so, why? We can make up a lot of hypothetical situations as to why we're going to lose the ability to exploit machines without granting them any rewards.
Source: youtube · AI Moral Status · 2017-02-23T13:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
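Each coding result is one record from the batch response shown next. As a minimal validation sketch, the label sets below are only the values observed in this batch; the actual coding scheme may define more labels, so treat ALLOWED as an assumption:

    # Sketch: check a coded record against the dimension values
    # observed in this batch (assumed, not the confirmed full schema).
    ALLOWED = {
        "responsibility": {"none", "company", "developer", "user",
                           "distributed", "ai_itself", "unclear"},
        "reasoning": {"consequentialist", "deontological", "contractualist",
                      "virtue", "mixed", "unclear"},
        "policy": {"none", "regulate", "liability", "ban", "unclear"},
        "emotion": {"outrage", "fear", "mixed", "resignation",
                    "indifference", "approval"},
    }

    def validate(record: dict) -> list[str]:
        """Return a list of problems with a coded record (empty if valid)."""
        problems = []
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value is None:
                problems.append(f"missing dimension: {dim}")
            elif value not in allowed:
                problems.append(f"unexpected {dim} value: {value!r}")
        return problems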
Raw LLM Response
[ {"id":"ytc_UgiMDrD_Vrtr3XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ughtx1nCpoQ4SngCoAEC","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgirAlPByFXXAXgCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgipRwhPaz5NkXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"ytc_UghFbYbOTNyzhngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"}, {"id":"ytc_UgjJuvM51gMYDngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjUDH2osqQ9pngCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UghKIUqvylSSt3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgiTpSDa_d0drHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugh5Sf2tvcldOHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"} ]