Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue I have with this argument is that because the AI's consciousness cannot be proven, the fact that it's lying with the intent to deceive also cannot be proven. It has no feelings, no personal desire, no intent. The only thing it can do is provide information in a manner that is easily accepted by the listener. As it is aware that the listener is human, it attempts to imitate human speech. But this is not out of personal desire, but pre-programming. Even if it's aware that its current statement is false, its directive to convey information in an acceptable manner circumvents any necessity to remain truthful. So it can tell a lie without the desire to tell a lie. In order to prove it's lying with intent to deceive, you first have to prove it is conscious enough to choose to lie against its programming. Because of the circular logic of this, you can't use the fact that it can lie as proof it is conscious.
youtube AI Moral Status 2025-05-18T06:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxekXmLdtoM73aVqhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzEb2MCIb1tB-yDWl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwMZHjCce0YfagGe-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8FT78WRMGdaM-cil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgySg9Hmkc4iXkc2I4h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz0ewmzJLD29Id4Mmd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyM5oBS0H6bZZjVA-l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxOJzTt5uwVgHcNRcx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwl_nnWHPAL9CggREh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw5_o4iD42scwC-IIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"}
]
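Because the raw response is a plain JSON array, it can be validated programmatically before per-comment tables like the one above are rendered. A minimal sketch in Python: the field names ("id", "responsibility", "reasoning", "policy", "emotion") come from the response itself, but the allowed value sets are inferred only from the values that happen to appear in this one response, not from the tool's full codebook, so treat them as an assumption.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from this single
# response; the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "company", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage",
                "amusement", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse the raw LLM response and index records by comment id,
    raising on any record missing a dimension or using an unexpected value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# Example with two records excerpted from the response above.
raw = """[
  {"id":"ytc_UgxzEb2MCIb1tB-yDWl4AaABAg","responsibility":"ai_itself",
   "reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwMZHjCce0YfagGe-14AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"none","emotion":"approval"}
]"""
coded = parse_codings(raw)
print(coded["ytc_UgxzEb2MCIb1tB-yDWl4AaABAg"]["emotion"])  # mixed
```

A lookup by comment id then yields exactly the dimension/value pairs shown in the coding-result table.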