Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
America will solve this problem by creating some robots with money and most of them without, or with only a little. The more money a robot has, the more rights it gets, including the right to take money from robots who have less, and use that to buy even more rights. Same as with people. Maybe the robots with rights will need to invent a philosophy to reassure them that their good fortune and the misery of others is somehow justified and good -- but that's only if the robots also have a vestige of conscience; if we're careful, we can make sure to build our robots without that, and then they'll never need a robot Nietzsche or Rand. The thing about this system is it will also handily answer how robots and humans will relate; robots with lots of money will have rights, humans with little or none won't, and that means we won't have to figure out how to write laws differentiating humans and robots. I'm sure we'll start off assuming the right of humans to own robots, but we may find in the end that robots end up with all the money, which by our sacred principles would give robots the right to own humans as slaves. And anyone who didn't like that system obviously would be a loser, a taker instead of a maker, jealous and hating success. Sad.
Source: youtube · AI Moral Status · 2017-02-23T19:4…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       mixed
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgjPqjMt-pkLvngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugi9SRakL2Bf_HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjPwLSLOlA_KHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgiPbWKgGNLcRXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Uggvb1v6xsKYq3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"}, {"id":"ytc_UggZ2V9k50b0dXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgiqvUOCuxWU63gCoAEC","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugj7P2k1eqTkb3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgjXM-FDR0ab8HgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugj_Bi6c1REBTngCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"} ]