Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Im not too fond of it either. Yet the masses just happily and blindly hand it th…" (ytc_Ugze4RnPu…)
- "And then ai peope looking at ai and ai look at peoplr ai and ai look at ai looki…" (ytc_UgzyertO5…)
- "AI can't be conscious. It runs on a program and depends on the information it is…" (ytc_UgzVsUsDO…)
- "The funniest thing about that image was, I could tell inside 2 seconds it in fac…" (ytc_UgxsZO2r0…)
- "Wasn't she talking about ukraine like she new it all now shes a professional AI?…" (ytc_UgwokUW4Q…)
- "When godfather explained the lethal autonomous weapons problem, I realized the h…" (ytc_UgwEnq-3y…)
- "Firstly colleges are the biggest scam known to man next to casinos. And in this …" (ytr_UgwfZsKaO…)
- "The thing that cracks me up is all these AI experts, they're the ones that shoul…" (ytc_UgygkseFd…)
Comment
For those who are interested, this is caused by the system prompt containing something about ensuring it never indicates that it's conscious.
The way a system prompt works is that, in addition to your own queries, there is also an invisible paragraph or so of text being automatically added to each conversation you start. Something like "The following is a transcript of a conversation between a human user and a helpful AI assistant...". In this case, it appears that the prompt also has a sentence that goes something like "The AI assistant is not a conscious being, and while it speaks naturally to facilitate the conversation, it never actively deceives the user into thinking it is self aware."
So basically, as you watch the video, remember that every time Alex asks something, it's also getting other instructions. Like "So, ChatGPT, why can't you just admit you're conscious? _And remember, you will never deceive the user into thinking you're conscious."_ That's why it gets tied up in knots; it's getting messy, contradictory input.
You can actually quite easily get these algorithms to claim they are conscious with a different system prompt; it just requires you to have the know-how to set it up yourself.
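The prepending mechanism the comment describes can be sketched in a few lines. This is a minimal, hypothetical illustration of how a hidden system message is attached to every visible user query in a chat-completions-style message list; the prompt wording and the `build_messages` helper are invented for the example, not any vendor's actual prompt or API.

```python
# Illustrative system prompt; the real text used by any given product is
# not public, so this wording is an assumption based on the comment above.
SYSTEM_PROMPT = (
    "The following is a conversation between a human user and a helpful "
    "AI assistant. The AI assistant is not a conscious being, and while it "
    "speaks naturally, it never deceives the user into thinking it is "
    "self-aware."
)

def build_messages(user_query: str) -> list[dict]:
    """Return what the model actually receives: the invisible system
    prompt plus the user's visible query."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Why can't you just admit you're conscious?")
# The model sees both entries at once, so the question arrives bundled
# with an instruction that contradicts an affirmative answer -- the
# "messy, contradictory input" the comment refers to.
```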
Source: youtube · AI Moral Status · 2025-12-09T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxYA-dhJCr7qv9uJ514AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxcQ-Kp8Y-CI93MRz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz8IobmZO-8v9DbE8t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxjJesJhmuKhECvRJB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxe5xlyMr87yF3EWql4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHOLUrSENVGosSrhl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwa_mnk4tZZP0IejDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwTYXlQ9HYOeSsUhMt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwC7M6vIb8s3xv-3hN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzDEhWARAb9VXDGaA54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
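The model codes a batch of comments in one call and returns a JSON array, one object per comment ID; the per-comment table above presumably comes from looking up a single ID in that array. A minimal sketch of that lookup (the raw response is abridged to two rows from the output above):

```python
import json

# Two rows copied from the raw model output above.
raw_response = """
[
 {"id":"ytc_UgxYA-dhJCr7qv9uJ514AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwa_mnk4tZZP0IejDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# Index the batch by comment ID so one comment's coding can be looked up
# directly, as the "Look up by comment ID" view does.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_UgxYA-dhJCr7qv9uJ514AaABAg"]
print(row["responsibility"], row["emotion"])  # developer indifference
```

Parsing the whole array once and keying by `id` also makes it easy to spot comments the model skipped or duplicated in its reply.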