Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Without emotions, wants, or desires, robots have nothing to guide them except their programming. So far the tasks given to robots are pretty straightforward and simple. Eventually robots will be asked to take on more nuanced tasks which require judgement. Robots don't have emotions or intuition, so they will need ethics. This concerns me because their programmers (humans) haven't got this figured out yet. Most of our ethics have more to do with our emotions than with our reasoning capabilities. We are more likely to do what feels right than what is actually right, and much of this is based on the way things are rather than the way they ought to be. In other words, human ethics are a bad model for robot ethics. When we look for ethics in what comes next, we see that the military is still the lead driver of progress. The last thing we want is a killer robot with human ethics. As I continue my quest for universal ethics, I ask myself, "What would I want our robot overlord to do in this situation?"
youtube 2016-02-09T21:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          fear

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugg4GInPcKkb-HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UggbNiIQAwMhLXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UggTwzbkp_XndHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UghF9MR_ZEVqKngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UghDbu_Pkj6s63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Uggcfz830ZCOfXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgiAgMSu73Y2NHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgjoAwGRrcSK63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UggTE2bn7JIxjXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UggHT3fme4glYXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"} ]