Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it's partly related to ideological ideas, for instance in the west we believe in Freedom and Capitalism (for most anyway) and it's what we're raised with. Problem is Freedom and Capitalism won't work together in regards to robotic rights, Capitalism will demand robots be exploited to their highest potential whilst Freedom will demand they be given equal rights should they achieve sentience. It'll either have to be a compromise such as exploiting robots but not creating the AI that allows them to become self aware or putting them on an even playing field to humans as far as their abilities and cognitive functions go. Problem is we can't stop someone from developing that AI on their own, so self aware AI is an inevitability unless programming becomes some sort of heavily regulated and sacred thing which let's face it won't happen. Rock and a hard place and will likely end in the eradication of the human race to make way for robots and androids. The problem isn't going to be the robots, the problem is us right now and the fact we aren't doing anything to prepare for these issues.
YouTube · AI Moral Status · 2017-02-23T13:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgiMDrD_Vrtr3XgCoAEC", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ughtx1nCpoQ4SngCoAEC", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgirAlPByFXXAXgCoAEC", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_UgipRwhPaz5NkXgCoAEC", "responsibility": "user",        "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UghFbYbOTNyzhngCoAEC", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgjJuvM51gMYDngCoAEC", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgjUDH2osqQ9pngCoAEC", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UghKIUqvylSSt3gCoAEC", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgiTpSDa_d0drHgCoAEC", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugh5Sf2tvcldOHgCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"}
]
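A coding result shown above can be recovered from the raw response by parsing the JSON array and indexing it by comment id. A minimal sketch (the variable names and the single-entry sample here are illustrative, not part of the tool itself):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Abbreviated to the entry for the comment displayed above.
raw = """[
  {"id": "ytc_UgirAlPByFXXAXgCoAEC", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]"""

# Index the batch by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the dimensions coded for one comment.
code = codes["ytc_UgirAlPByFXXAXgCoAEC"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
```

Running this prints the same dimension values the Coding Result table reports for this comment: `distributed contractualist regulate mixed`.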