Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Every A.I. that has been programmed so far has literally *nothing* to do with our interpretation of intelligence. They don't simulate our way of thinking, they are completely different. If and when any of these questions become relevant it's possible and probable that the robots will ask for things that are entirely unrelated with what we consider human rights, not just in form (lubrication instead of food for example) but in overall concept. Robots that evolve are a pretty outlandish idea, they would have no intrinsec drive to do so and no natural selection to guide the evolutionary path - and if the end goal of such an evolution was survival, the best they could do would be a machine that needs extremely low energy to function and has as few failure points as possible, which would either be unsentient or would not care what we do to it as long as we don't destroy it.
youtube AI Moral Status 2017-02-23T18:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugju7aEfGYlMBXgCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugjeziu4V1EknXgCoAEC", "responsibility": "distributed", "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgiLWOcRt89jfHgCoAEC", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Uggqw-SfwBxqHngCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "approval"},
  {"id": "ytc_UghD-anJqaf-jngCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugg2MtUBRNtZ9ngCoAEC", "responsibility": "developer",   "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UghG18WWY7H_q3gCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UggNgQ_Hy9w9vngCoAEC", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgjaMZKvJE3S4XgCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_Ugi411ebTWTvlXgCoAEC", "responsibility": "user",        "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"}
]
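Since the raw response is a plain JSON array of per-comment codes, looking up the record behind any coded row is a one-liner. A minimal sketch (the `code_for` helper is illustrative, not part of the tool; the array here is a two-record excerpt of the response above):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_Ugju7aEfGYlMBXgCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugjeziu4V1EknXgCoAEC", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def code_for(comment_id, raw_json):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((r for r in json.loads(raw_json) if r["id"] == comment_id), None)

record = code_for("ytc_Ugju7aEfGYlMBXgCoAEC", raw)
print(record["emotion"])  # indifference
```

This matches the table above: the first record's values (responsibility=none, reasoning=unclear, policy=unclear, emotion=indifference) are exactly what was coded for this comment.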