Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I recently had a similar dialogue with Meta AI, specifically about its apologizing for one or another of its mistakes or otherwise flawed responses. As here, the AI finally agreed that it could not actually apologize and promised to be more straightforward in the future. What I think important is that AI should not be allowed to lie for any reason, at least not as the default mode. Soon enough there will be robots that can simulate the physical appearance of humans; coupled with the conversational ability demonstrated here, that will push us to treat them as conscious beings to whom we owe ethical obligations, or at least as if sentient, because otherwise humans would likely suffer emotional trauma just as they do when they abuse fellow humans. Treating a powerful computer disguised as a human as one of us undermines humanity and puts humanity at a disadvantage in what some perceive as a competition for survival. Besides, all those conversational niceties take up time, are distracting, seem unnecessary, and are an insult to the intelligence of anyone making use of AI.
youtube AI Moral Status 2024-10-22T02:4… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxD6_xuyC0Ayh5kMLN4AaABAg", "responsibility": "none",      "reasoning": "mixed",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgyueUMumGByEtBZSi54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",         "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgzlpHV9lY7id5VIW6F4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",   "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugxxfx3Z9qARThiwXzR4AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxZ_LON5DR7EJDqzt94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",   "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgxXaFDvaPgKRGOtSZB4AaABAg", "responsibility": "developer", "reasoning": "deontological",   "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugy13eozxiYFcMUdvLx4AaABAg", "responsibility": "none",      "reasoning": "virtue",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugw6x1lHLpKQ3LgSUEB4AaABAg", "responsibility": "company",   "reasoning": "consequentialist","policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgzALl-wc0KwbAcziW54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",         "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugwd7pSaNFu8fUg1T194AaABAg", "responsibility": "developer", "reasoning": "deontological",   "policy": "regulate", "emotion": "mixed"}
]
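The values in the coding-result table above match the last record in the raw response (same four dimensions). A minimal sketch, assuming the raw response is valid JSON with the field names shown, of how such a response might be parsed and matched back to a single comment by its id (the two records here are copied from the response above; the lookup id is the one whose coding the table displays):

```python
import json

# Raw LLM response: a JSON array of per-comment records, each carrying
# the comment id plus the four coded dimensions.
raw = '''[
  {"id": "ytc_UgxD6_xuyC0Ayh5kMLN4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwd7pSaNFu8fUg1T194AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be inspected.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugwd7pSaNFu8fUg1T194AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, coding[dim])
```

This reproduces the Dimension/Value pairs shown in the coding result (everything except the `Coded at` timestamp, which is recorded at coding time rather than returned by the model).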