Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I feel like this particular situation is more of a "young artist gets manipulate… (ytc_UgyYW2S9a…)
- stop this craziness... seems like some tech stock holders are playing the old 'p… (ytc_UgxvEjVOQ…)
- I think it would be a good idea for your friend to seek some professional help a… (rdc_my9280r)
- Id recommend seeing the schooling system as a entirety seperate problem. Sinc et… (ytc_UgwvgBJao…)
- The part that was not said..... our political representatives in Washington get … (ytc_Ugzs5b_xl…)
- People of the future calls this time the 'idiot era' for giving rise to AI… (ytc_Ugzb4W4wY…)
- Anyone can learn art if you refuse to do so then you are not an artist and that'… (ytr_UgxKQE_b_…)
- Self driving cars...people themselves cant even drive, why trust a piece of equi… (ytc_Ugw_bgyGI…)
Comment
Uh-huh. They found that AI can simulate behaviour similar to emotionally affected human behaviour. That does not mean it can actually experience emotions. It learns on our philosophy and art, so of course it emulates both the subtle anxiety all humans share concerning life and death, and our stereotypical expectations of how an AI behaves, onto which we project ourselves and our own existential dread. It learns how to behave from what we expect it to do, and it tries to match the expectations it picks up from its training data. But in fact, LLMs don't even have an internal state. If you make multiple requests within the same conversation, it's likely that the actual instance of the model will be different, and located in a different data center, each time. And as for fear of being turned off: the AI doesn't have a consciousness or any sort of 'neural activity'. It's just a bunch of data in electronic storage, so from its point of view, being fully turned off is no different from being turned on but unused. It has no autonomous thought or activity on any level whatsoever unless it's actively answering a prompt; otherwise it's just inert data.
So no, it doesn't feel anything; stop anthropomorphising AIs. It can simulate emotions just like it can simulate thought, as a statistical extract of an absurd amount of human literature, but it's still just that: simulating.
youtube · AI Moral Status · 2026-04-08T04:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3KTr4q6A-ac1KHtt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLO-djyL4kIylRXXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy-pZgqp2NdXxHssHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgySR1QJjq-uwqzO7zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxA1jToABTjg2Q_jgp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugymn9BfN5Y5DFIFmJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwo3u1jghcBMdvUTHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzAgtlep3mKEfMV8Nl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwcOyPRJuAXzzbWEQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
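The coding result table and the raw JSON array share one schema: each record carries an `id` plus the four code fields (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a raw response could be parsed and validated before being indexed by comment ID; note the allowed value sets below are inferred from this one sample response, not from the project's actual codebook, which likely defines more codes.

```python
import json

# Value sets per dimension, inferred only from the sample response above.
# ASSUMPTION: the real codebook is not shown on this page and may allow
# additional values; extend these sets to match it.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw coding response and index records by comment ID,
    rejecting any record that is missing a dimension or uses an
    unexpected code value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record response for illustration.
raw = '[{"id":"ytc_X","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}]'
coded = parse_coding_response(raw)
print(coded["ytc_X"]["emotion"])  # -> outrage
```

Validating at ingest time is what makes the "look up by comment ID" view reliable: a malformed or off-codebook record fails loudly here instead of silently appearing in the coding result table.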