Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Human values?" lol, humans do terrible things to get and maintain all sorts of … (ytc_Ugy9rIy6O…)
- Excuse me sir...the horrific part of the war occurred in Israel, not Gaza, on Oc… (ytc_UgzV8q85n…)
- 5 year tesla owner. 3 different models. We are not there yet, FSD could make fat… (ytc_Ugw43D_7Z…)
- ' no idea how AI works ' Isnt it obvious that its sentient. Means it replaces … (ytc_UgwMJC4yR…)
- Guys! This a Game called "Detroit Become Human" about AI wanting rights to live,… (ytc_UgwsvncH1…)
- Its all scripted... Don't be fooled ai isnt nearly as powerful as these people w… (ytr_UgzenVcmW…)
- Crazy but what could an AI robot do against a shotgun? Or a group of angry peopl… (ytc_UgxLiVqPC…)
- Yeah, I hear ya. But AI like ChatGPT isn't the first technology to change jobs. … (ytr_Ugx5MRrYx…)
Comment
There is evidence from evolutionary psychology that helps explain this. People are predisposed to see agency. The hypothesis is that the evolutionary cost of thinking there is a member of another tribe in the bushes who wants to kill you and being wrong is far less than the cost of thinking there isn't someone there and being wrong. There is evidence for this from experiments with children (they show them shapes interacting on a screen and the kids say "the circle is trying to help the square get over the barrier") as well as from adults.
I've been working in AI since the early '80s. In the very early days of expert systems, Weizenbaum at MIT created what today would seem like a really trivial toy system using rules. It was all meant as a joke: a mock therapist. It would match basic patterns and give appropriate responses, and it had some default rules for when nothing matched that just cycled through various phrases like "Please tell me more". To his amazement, he found people interacting with the system as if it were a real therapist, telling it their deepest feelings and problems and forming personal attachments to it. If people could be fooled by such primitive software, it's totally predictable that people will be fooled by LLMs. See https://en.wikipedia.org/wiki/ELIZA for more details.
Also, ChatGPT in particular has improved a lot in its ability to have long-term memory about specific users. I know how this stuff works, and even I'm surprised at times by how well ChatGPT can remember past discussions and understand my questions, using our history to fill in things I don't say in my prompt. ChatGPT is also really good at "yes, and"-ing. That's a phrase from improv, where you always try to build on what another performer does rather than negating it. I often have to give it prompts like "Please critique" or "Please find flaws in this line of reasoning" to make sure it isn't just reinforcing what I want to hear.
Source: youtube · Video: AI Moral Status · 2025-07-10T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
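Each row of the table corresponds to one field of the model's JSON output for this comment. A minimal sketch of checking a coded record against the dimension schema; the allowed-value sets here are assumptions inferred from values observed in this batch, not a documented codebook:

```python
# Validation sketch for one coded record.
# NOTE: the allowed-value sets are assumptions inferred from observed
# codings in this batch, not an official codebook.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "resignation", "fear", "outrage"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

record = {"id": "ytc_UgwK8WqewuNUbi_RZf14AaABAg",
          "responsibility": "none", "reasoning": "mixed",
          "policy": "none", "emotion": "indifference"}
print(validate(record))  # → []
```

A check like this catches the common failure mode of batch coding, where the model drifts outside the requested label set mid-response.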
Raw LLM Response
```json
[
  {"id":"ytc_UgwK8WqewuNUbi_RZf14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxfAavUbiPIPK-v-9Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxzt_CYBvhXhjdj0vV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz_rgo7yTes-Pbf4ZZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxrFPIYrHn3I3dDHIp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzT0g_FosaSfP2IWs54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwYuDJeO2Zmr-yYEAR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRDKJcY0UITzW2Mi54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy193Ts06Awn0aXZ7F4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwh7YwKlZTsxgPjoqV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
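The raw response is a JSON array with one object per comment in the batch. A small sketch of how the "look up by comment ID" view could be implemented over such a response (the variable names are illustrative, not from the tool itself, and the excerpt below shows only two of the batch's records):

```python
import json

# Illustrative excerpt of a raw batch response (two records shown).
raw_response = '''
[
  {"id": "ytc_UgwK8WqewuNUbi_RZf14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_rgo7yTes-Pbf4ZZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
'''

# Index the batch by comment ID so a single coding can be pulled up directly.
by_id = {record["id"]: record for record in json.loads(raw_response)}

coding = by_id["ytc_Ugz_rgo7yTes-Pbf4ZZ4AaABAg"]
print(coding["emotion"])  # → fear
```

Indexing once into a dict keeps each subsequent ID lookup constant-time, which matters when inspecting many comments from a large batch.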