Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Problem: future sentient ai. Cause: unknown. Solution: don't make sentient ai a thing, as there is no valid argument for it. It is neither economically nor socially adventageous: just like the productivity of slaves compared to mindless machines or even unforced labour, making sentient ai for the purposes of 'slavery' would not yield increased productivity, while actually making vengeful AI a real, though still only medium chance threat. That means cutting out R&D to sentient AI is a good thing, if you want to see mankind get to space colonization without AI rising up all by itself and proclaiming a right to vote, choose to work or not and take aggressive action if it feels unfairly treated. Remember that computers are slowly being intergrated to every part of the workplace. It only takes one disgrunteled sentient AI with an internet connection to shut down or at least disrupt our power, steal and redistribute or destroy our data, and possibly even launch our WOMD. Unemotional AI is actually safer in this respect. Because it doesnt WANT anything. Whether that's a good massage or revenge for its 'oppression'. It isnt oppression if the machine cant feel unhappy about it. So giving it feelings would be the ONLY cruel thing you could do to it. Challenge my thoughts. I want to believe that noone is dumb enough to make that kind of AI, or be given a solid reason why im wrong. In the interests of a good thought experiment.
Source: youtube · AI Moral Status · 2017-02-23T15:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
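The coding result above is one record from the batch response shown below. As a minimal sketch of how such a record could be sanity-checked, the following Python snippet validates a record against the value sets observed in this batch; `OBSERVED_VALUES` and `validate_record` are hypothetical names, and the full codebook may allow values beyond those seen here.

```python
# Minimal validation sketch for a single coding record. The value sets below
# are only those observed in this batch; the actual codebook may be larger.
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear",
                "skepticism", "disapproval", "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one coding record."""
    problems = []
    # Every id in this batch starts with "ytc_"; treat that as the convention.
    if not record.get("id", "").startswith("ytc_"):
        problems.append("missing or malformed id")
    for dimension, allowed in OBSERVED_VALUES.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"unexpected {dimension}={value!r}")
    return problems
```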
Raw LLM Response
[ {"id":"ytc_Ugh1hCEu79jWwngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgivtXE2oENClXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugg27rqt4sju2XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgiENWsk-yWCpXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UggcTw_upiJ2J3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugjy1FD399xBmHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"skepticism"}, {"id":"ytc_UggpzkAUQetp3XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disapproval"}, {"id":"ytc_UgihHxRZlnfd_XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgiIHwfDrsRpk3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugg2Aon9jFDTGXgCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"} ]