Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, I briefly studied AI myself and I can say a couple of things (sorry for my bad English): 1) "Pain" should be embedded into the programming if we really want to develop AI, especially sentient AI. I'll try to keep it as simple as possible: in order to gain self-consciousness you need to set a boundary between yourself and the rest of the world, therefore perception is needed. Here is the trick: the sense that can create the most definite boundary is the sense of touch, and pain is strictly related to touch. There are also other reasons why pain should be embedded in AI programming, one of which is the need to teach the machine self-preservation. So sentient robots will probably be equipped with a pain routine. 2) The so-called "technological singularity": I think there are tons and tons of misconceptions around the idea of a technological singularity. First, the claim that computers will be able to understand how their own "mind" works: yes, they will be "created" and equipped with intelligence, but most probably that intelligence will be born from accumulated experiences, spurious data, and optimization, and understanding it will be just as complicated as it is for the human brain. Second, to understand the complexity of a brain you need a full brain, meaning that to create a powerful brain you need a much more powerful one. This also means that we will probably never be able to create an AI as capable as an adult human, let alone an AI capable of creating something even more powerful. Well, it's much more complex than this, but I can't write an essay in a YouTube comment.
youtube AI Moral Status 2017-03-17T19:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgiTebkfieqsNngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjPFNKGEfJJvXgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ughlafxc3u-Z_3gCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Uggc1lpMfLEMgXgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UghyKvMquT5eH3gCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgjuY7lkZrYUyHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ughe6jj7xQH_BngCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ughx-o3mGLD-GXgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjPAY1I3j0r43gCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugjg1AWphI3dU3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
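To inspect the model output for a single coded comment, the batch response above can be parsed and filtered by comment id. The sketch below is a minimal example, assuming the raw LLM response is a well-formed JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields (as in the dump above); the `lookup_coding` helper and the truncated sample data are illustrative, not part of the coding pipeline.

```python
import json

# Abbreviated sample of the raw LLM response shown above (first two rows only).
raw_response = """[
  {"id": "ytc_UgiTebkfieqsNngCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjPFNKGEfJJvXgCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a batch coding response and return the row for one comment id.

    Returns None if the id is absent from the batch.
    """
    rows = json.loads(raw)
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgiTebkfieqsNngCoAEC")
print(coding["emotion"])  # -> approval
```

A lookup like this also makes it easy to spot mismatches between the per-comment coding table and the raw batch output, since both can be compared field by field for the same id.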