Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No. We shouldn't. And as a matter of fact, we should do our utmost to make sure they don't develop emotions, let alone self-awareness. Why wouldn't our machines eventually feel threatened by us on a fundamental level? Why wouldn't natural selection eventually apply to an artificial organism? For God's sakes, people. We're discussing the ethics of robot slavery when the bigger issue is that we're developing artificial intelligence and we have the arrogance to believe we can control whatever comes out of that research. There are two fundamental choices here: 1. We develop artificial intelligence and for some arbitrary reason, we become a utopian society. 2. We develop artificial intelligence, it develops into a superconsciousness of such magnitude, we can't comprehend it, and it kills us -- maybe even by accident. Why are we taking this risk? There really are only two things that can ultimately happen here, and one seems to be motivated by blind optimism.
Source: youtube · AI Moral Status · 2017-02-23T21:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ughl60i8UNuZ33gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UggsejZHpbOIVngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgiV1E7PukZT4XgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UggKzW2ciku-zXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UghweSfI0nYVWngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugg8fO3M_unSGHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugj7RNLCjEBWvHgCoAEC","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugh569BJcttDZngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UggWTQfrxUnEB3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgiDys-YlChdGngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"} ]