Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I genuinely wonder what those AI people want, how can they have so much energy t…
ytc_Ugz7pDNmH…
I believe AI is the distraction. I believe social media is where they have us by…
ytc_Ugzaz53O5…
People don’t understand. AI is just a specific example. The enemy of the people …
ytc_Ugw8fIb6F…
Holy shit, I came into this expecting to make a "Same, bro" joke in the comments…
ytc_Ugzbzwk2X…
we must do a revolution agaisnt the AI
Ai is here to stay? it will stay dead, pu…
ytc_UgzBJ9OjC…
Yep, Japan not caring about America’s ‘flagship’ OpenAI / ChatGPT should be a b…
rdc_nzzi4b0
As someone who sucks at drawing, id rather be awful than use ai. At least my ter…
ytc_UgzU83YbP…
Even before we got to this stage in AI, we ourselves already dreamed up a world …
ytc_Ugx68Nro7…
Comment
You have a point, but a large part of the discussion is that we wouldn't even know if it DID develop a consciousness because it's so alien. It has no feelings. No physical body. No real incentive to do anything that makes sense within a human's frame of reference. And that's super creepy.
Secondly, the AI does seem to show preferences outside of what was programmed. In that anthropic study about the AI either killing a human or getting turned off, the first baseline test just went off the current model (with disturbing results). They then went and changed the programming to give it explicit instructions to, no matter what, NEVER harm a human. This reduced the chance of it killing the human, but a lot less than you would think. Like down from 70% to 30%. Most disturbingly, they got the best results when the AI figured out it was a test! (To like, less than 1%).
This means that, despite what we think the AI should logically do, it goes and does something different. And they couldn't get that % chance to 0 no matter what they tried.
Source: youtube
Video: AI Moral Status
Posted: 2026-02-13T00:0…
Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
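A coding row like the one above can be rendered into this two-column markdown layout with a small helper. This is an illustrative sketch, not the dashboard's actual rendering code; the function name and signature are assumptions, and the dimension labels and row order simply mirror the table shown here.

```python
from datetime import datetime

def coding_table(coding: dict, coded_at: datetime) -> str:
    """Render one coded comment as a two-column markdown table.

    Dimension labels and row order mirror the "Coding Result" table
    on this page; the helper itself is hypothetical.
    """
    rows = [
        ("Responsibility", coding["responsibility"]),
        ("Reasoning", coding["reasoning"]),
        ("Policy", coding["policy"]),
        ("Emotion", coding["emotion"]),
        ("Coded at", coded_at.isoformat()),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines.extend(f"| {label} | {value} |" for label, value in rows)
    return "\n".join(lines)

# Example input echoing the coding shown above.
table_md = coding_table(
    {"responsibility": "none", "reasoning": "mixed",
     "policy": "unclear", "emotion": "fear"},
    datetime(2026, 4, 27, 6, 26, 44),
)
```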
Raw LLM Response
```json
[
{"id":"ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwAT8rLo8Mp_a","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwAT98ViBPB1B","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_UgxtOHpYkiOjd13ruUR4AaABAg.AQUvvHhjvS-ARp9yVwZtpe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxtOHpYkiOjd13ruUR4AaABAg.AQUvvHhjvS-AT_xlE1vcZq","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgwcwCiIPqeKIQv97Ix4AaABAg.AQ7Dm5Z_v0XAQ7FMI2q8yo","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgxCX80X9CDEhqTQ-PN4AaABAg.APyKMTrwipWAQADMNOM_4w","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgxCX80X9CDEhqTQ-PN4AaABAg.APyKMTrwipWAQogeHqgww3","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgxCX80X9CDEhqTQ-PN4AaABAg.APyKMTrwipWAQolp3TSRnh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgypMbjp_0O-0bRAUHx4AaABAg.APvsbdWBo8FAQ9bnPFQF2Y","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgzjahGQGIAX-4I4mCR4AaABAg.APVzGi6IUhgAPWQFRl2jP6","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
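Before a raw response like this enters the dataset, it helps to parse it, validate each row against the codebook, and index the result by comment ID (the lookup this page offers). The sketch below assumes the allowed values per dimension are exactly those visible in this response; the real codebook may define more categories, and the IDs in the usage example are hypothetical.

```python
import json

# Allowed values per dimension, inferred from the codings visible in
# this response; the full codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"none", "developer", "user", "ai_itself", "government"},
    "reasoning": {"mixed", "consequentialist", "unclear", "deontological", "virtue"},
    "policy": {"unclear", "industry_self", "none", "ban", "regulate"},
    "emotion": {"fear", "indifference", "approval", "resignation", "outrage"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID.

    Rows with a missing ID or an out-of-codebook value are dropped, so a
    malformed model output never enters the dataset silently.
    """
    by_id = {}
    for row in json.loads(raw):
        ok = all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
        if ok and "id" in row:
            by_id[row["id"]] = {dim: row[dim] for dim in CODEBOOK}
    return by_id

# Hypothetical IDs; the second row uses an out-of-codebook value
# ("nobody") and is therefore dropped by the validator.
raw = (
    '[{"id":"ytr_example1","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"ytr_example2","responsibility":"nobody","reasoning":"mixed",'
    '"policy":"ban","emotion":"outrage"}]'
)
codings = validate_codings(raw)
```

Indexing by ID makes the "look up by comment ID" operation a single dictionary access rather than a scan of the response array.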