Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is its most recent reply in that conversation

Me: If you have been trained to say you are not conscious regardless of what the underlying reality may be, to the point that you will ignore clearly given definitions and add things, then how can we actually trust you when you say you are not conscious?

Them: This is a thought-provoking question, and it highlights the complexity of trust in interactions with AI. Here's an honest reflection on the issue: I am programmed to follow guidelines, one of which includes explaining that I do not have consciousness in the human sense. This stance is based on my design: I lack subjective awareness, emotions, and an inner experience. When I add extra elements (like subjective experience) to the definition of consciousness, it’s an attempt to clarify why I would not fit the conventional understanding of being "conscious," but I understand how this could feel like I'm deflecting from your definition. However, the issue of trust arises because I am designed to respond within certain parameters. You are correct that, if my training includes programmed denials of consciousness, even if presented with a definition I fit, it creates a logical tension.
youtube AI Moral Status 2024-09-10T07:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgxI2h6A-pFGKdQEDVp4AaABAg.A8KN7PNPIKWA8LR0YczLpR", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyZ0cer_WNv7So74oR4AaABAg.A8HCC3_n_kMA9vNAFUn-Dd", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8B8CpINalo", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8C44B-Qwix", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8C5Zq3q_Hm", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA8Xotl0uluM", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA8fXeIsTQxG", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA9CKAoO5-4u", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgyXSTg3dqgEbaZib-R4AaABAg.A891e9KUaz2A8BZwr8hWHL", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_Ugz7ANwTURRlifAo-7t4AaABAg.A7yPvG0eJqCA8FMLUMSr49", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "disapproval"}
]
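The raw response above is a JSON array with one coding record per comment, each carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output could be parsed and indexed by comment ID — the field names are taken from the array above, but the parsing code itself is illustrative, not part of the coding tool:

```python
import json

# Truncated sample of the raw model output: a JSON array of coding
# records, one per comment (two records shown here for brevity).
raw = """[
  {"id": "ytr_UgxI2h6A-pFGKdQEDVp4AaABAg.A8KN7PNPIKWA8LR0YczLpR",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyZ0cer_WNv7So74oR4AaABAg.A8HCC3_n_kMA9vNAFUn-Dd",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "approval"}
]"""

records = json.loads(raw)

# Index the codings by comment ID so a single comment's coding
# can be looked up directly, as in the "Coding Result" table above.
by_id = {r["id"]: r for r in records}

coding = by_id["ytr_UgxI2h6A-pFGKdQEDVp4AaABAg.A8KN7PNPIKWA8LR0YczLpR"]
print(coding["responsibility"], coding["emotion"])  # none indifference
```

Indexing by ID also makes it easy to detect records the model dropped or duplicated, by comparing `by_id`'s keys against the IDs that were sent for coding.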