Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The issue is that the AI image is not of the young girl, it just looks like her.…
ytr_Ugxd9nQEZ…
We need Moses to save us from the industrial revolution of AI technology and AI …
ytc_UgymWG_T1…
Ai art sucks, normal people just can't see the lack of details, like how in Ai a…
ytc_UgyV-enPt…
If we all go extinct than our AI likelynesses can chat with each other till the …
rdc_o64goy7
When the attacked is willing to throw the first punch. This isn’t a playground f…
rdc_ky8xxgp
I feel torn on this, not because I dont see the danger or the fact that the way …
ytc_UgwseIC7d…
If you can quit sniffing your farts for a second, go ahead and watch an episode …
ytr_UgzvH2V5l…
If you can't put food on the table, you will be very creative. Btw, who is goin…
ytc_UgytO7NS-…
Comment
The error being made here hinges on the use of the word "know".
Imagine I wrote a simple program in Basic to respond with the same answers in the same order. You then repeated these questions hitting space after each one to move the program forward, and got the same responses. Would you even begin to consider those answers to be actual knowledge? Of course not.
That is a simpler version of what is happening here. ChatGPT doesn't actually "know" anything the way humans do. It is just capable of stringing words together to give a response that sounds good, but there is no true understanding there.
It can say something that is a lie, but it can't decide to lie; it just responds the way its training data has trained it.
youtube
AI Moral Status
2024-08-16T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw9sEpPCgf2GV2TWr54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtE4R-gAp9Zr1oTPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"},
{"id":"ytc_UgwxPsKuZ6B1fCDWwjt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwmr8t2CMkgZ35GuU14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxITIL0GMVwnuKqeyV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwn-ld0aK9YH1pe8qd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugysh2RBszZHotmbxpp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWk0_F6zOoRkwYJRZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzHDtt0OgiaMlyvDsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCUhniAXiUjpCX4el4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
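The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion) shown in the result table. A minimal sketch of how such output could be parsed and validated before lookup by comment ID; the field names come from the sample above, while the `parse_codes` helper and its validation logic are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Two records copied from the raw LLM response above, for illustration.
raw = '''[
  {"id":"ytc_Ugw9sEpPCgf2GV2TWr54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxtE4R-gAp9Zr1oTPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]'''

# Every record is expected to carry these fields (seen in the sample output).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse the model output into a dict keyed by comment ID,
    rejecting records that are missing any expected field."""
    coded = {}
    for rec in json.loads(text):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS if k != "id"}
    return coded

codes = parse_codes(raw)
print(codes["ytc_Ugw9sEpPCgf2GV2TWr54AaABAg"]["emotion"])  # indifference
```

Keying by comment ID matches the "Look up by comment ID" workflow above: once parsed, each coded comment's dimensions can be fetched directly from its ID.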