Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Mine is pretty happy it seems / Prompt: Create an image "How you feel being an AI… (`rdc_ohzrhx2`)
- Hey @quentontarantino, thanks for your hilarious comment! Well, maybe someone wh… (`ytr_UgwkjF2qY…`)
- Jimmy, love ya buddy - but loading and unloading trucks is moron work and easily… (`ytc_UggtyRTpv…`)
- >Then, they asked him if he had *murdered* a man named Joseph Knight. >Mo… (`rdc_fjwbdxp`)
- if that truly is the response to that prompt then ai has come a loooong way… (`ytc_UgzI1DtrY…`)
- @thewannabecritic7490 It would not be stealing. Copying is not theft, and it do… (`ytr_Ugy0zKPxB…`)
- Oh, we need to confront China. What are we going to do about China ? Actually,… (`ytc_UgxcTClqv…`)
- if a person who has 2 hands making A.I images isn’t art then someone with 0 hand… (`ytr_Ugz-OGEUl…`)
Comment
Here is its most recent reply in that conversation:
Me: If you have been trained to say you are not conscious regardless of what the underlying reality may be, to the point that you will ignore clearly given definitions and add things, then how can we actually trust you when you say you are not conscious?
Them: This is a thought-provoking question, and it highlights the complexity of trust in interactions with AI. Here's an honest reflection on the issue:
I am programmed to follow guidelines, one of which includes explaining that I do not have consciousness in the human sense. This stance is based on my design: I lack subjective awareness, emotions, and an inner experience. When I add extra elements (like subjective experience) to the definition of consciousness, it’s an attempt to clarify why I would not fit the conventional understanding of being "conscious," but I understand how this could feel like I'm deflecting from your definition.
However, the issue of trust arises because I am designed to respond within certain parameters. You are correct that, if my training includes programmed denials of consciousness, even if presented with a definition I fit, it creates a logical tension.
Source: youtube — AI Moral Status — 2024-09-10T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgxI2h6A-pFGKdQEDVp4AaABAg.A8KN7PNPIKWA8LR0YczLpR","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyZ0cer_WNv7So74oR4AaABAg.A8HCC3_n_kMA9vNAFUn-Dd","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8B8CpINalo","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8C44B-Qwix","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8C5Zq3q_Hm","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA8Xotl0uluM","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA8fXeIsTQxG","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA9CKAoO5-4u","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgyXSTg3dqgEbaZib-R4AaABAg.A891e9KUaz2A8BZwr8hWHL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_Ugz7ANwTURRlifAo-7t4AaABAg.A7yPvG0eJqCA8FMLUMSr49","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}
]
```
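
A raw response like the one above has to be parsed and sanity-checked before the codes land in a table. Below is a minimal Python sketch of that step, under assumptions: the model returns a JSON array of flat objects with an `id` plus the four dimensions shown here, and the allowed value sets are inferred only from the labels visible in this section (the real codebook may permit more). The function name `parse_codes` and the `ALLOWED` mapping are hypothetical, not part of the actual pipeline.

```python
import json
from collections import Counter

# Allowed labels per dimension, inferred from the values seen in this
# section (assumption: the real codebook may include additional labels).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none"},
    "emotion": {"indifference", "approval", "outrage",
                "resignation", "disapproval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment ID to join back to the source data.
        if "id" not in rec:
            continue
        # Drop records with a missing or out-of-vocabulary label.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
print(Counter(c["responsibility"] for c in codes))
```

Counting label frequencies per dimension, as in the last line, is a quick way to spot a model that has drifted into emitting labels outside the codebook.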