Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't know whether programming self-awareness into a robot or AI would be essential or effective. It really comes down to the question of building a "soft AI" versus a "hard AI". Soft AIs are only capable of doing what they are programmed to do. A chess-playing AI is a perfect example: it tries to find the best move it can and will certainly improve at its assigned job. These are probably the best AIs we can build and achieve for now, since they perform some assigned tasks way (WAY) faster than we can, and they are easy to control, since their actions are entirely based on programmed tasks.

What are the so-called "hard AIs", then? Hard AIs are ones potentially capable of rebuilding themselves into entirely new AIs with new inner structures. They are just like humans: they adapt to new environments, are capable of analysing big data across the internet, and might develop emotions if necessary. At that point it will be pretty hard for us humans to control them, which is the major difference from soft AIs, since they will evolve so fast that, in a matter of time, they will transcend the knowledge we have stacked up over decades. They will create their own "better" AIs, which may have more potential than the previous ones.

What I am trying to say is: will we ever need a hard AI? What is the merit of having a hard AI? Everything we do nowadays may and will be done by soft AIs. They do their jobs far faster than we can; they are enough for us. We do not need hard AIs that might be impossible to control with our own hands. Why do AIs need emotions, when they can still perform well without them?
YouTube · AI Moral Status · 2017-02-25T19:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgiTBs6wCcGnmngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjPgAGBk5lkMXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgjFmhjGbKZhE3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgjUDhYIxaSYa3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugjmr977kLtIpngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggozFPMngYDmngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughn4mH5-Yvet3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggLcc9GxtOJrngCoAEC","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugh599rq62naI3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgiyBd4oaw_TlXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
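The raw response is a JSON array of per-comment coding records, each carrying the four dimensions shown in the Coding Result table. A minimal Python sketch of how such a response could be parsed and sanity-checked; the allowed value sets below are inferred only from the values visible in this one response, not from the tool's official codebook, so treat them as assumptions:

```python
import json
from collections import Counter

# Excerpt of the raw LLM response above (two of the ten records).
raw = """[
  {"id":"ytc_UgiTBs6wCcGnmngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugjmr977kLtIpngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""

# Value vocabularies per dimension, inferred from this response alone
# (an assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def invalid_ids(records):
    """Return ids of records with a missing or out-of-vocabulary value."""
    bad = []
    for rec in records:
        if any(rec.get(dim) not in allowed for dim, allowed in ALLOWED.items()):
            bad.append(rec.get("id"))
    return bad

records = json.loads(raw)
print(invalid_ids(records))                      # expect an empty list
print(Counter(r["emotion"] for r in records))    # distribution of the emotion dimension
```

A check like this is useful because the LLM output is free-form text: if the model drifts from the expected vocabulary (e.g. emits "anger" instead of "outrage"), the offending record ids surface immediately instead of silently skewing downstream tallies.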