Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
There is evidence from evolutionary psychology that helps explain this. People are predisposed to see agency. The hypothesis is that the evolutionary cost of thinking there is a member of another tribe in the bushes who wants to kill you and being wrong is far less than the cost of thinking there isn't someone there and being wrong. There is evidence for this from experiments with children (they show them shapes interacting on a screen and the kids say "the circle is trying to help the square get over the barrier") as well as from adults.

I've been working in AI since the early '80s. In the very early days of Expert Systems, Weizenbaum at MIT created what today would seem like a really trivial toy system using rules. It was all meant as a joke and was a mock therapist. It would match basic patterns and give appropriate responses, and it had some default rules that fired when nothing matched and just cycled through phrases like "Please tell me more". To his amazement, he found people interacting with the system as if it were a real therapist, telling it their deepest feelings and problems and forming personal attachments to it. If people could be fooled by such primitive software, it's totally predictable that people will be fooled by LLMs. See https://en.wikipedia.org/wiki/ELIZA for more details.

Also, ChatGPT in particular has improved a lot in its ability to keep long-term memory about specific users. I know how this stuff works, and even I'm surprised at times by how well ChatGPT can remember past discussions and use our history to fill in things I don't say in my prompt. ChatGPT is also really good at "yes, and"-ing, a phrase from improv where you always try to build on what another performer does rather than negating it. I often have to give it prompts like "Please critique" or "Please find flaws in this line of reasoning" to make sure it isn't just reinforcing what I want to hear.
youtube AI Moral Status 2025-07-10T00:1…
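The rule-based behavior the commenter describes (match a keyword, emit a canned reply, fall back to stock phrases when nothing matches) can be sketched in a few lines. This is a minimal illustration of the idea, not Weizenbaum's actual ELIZA/DOCTOR script; the patterns and responses below are invented for the example.

import random
import re

# Hypothetical, heavily simplified ELIZA-style rules: a regex keyword
# mapped to a canned response template. Not the original DOCTOR script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
]

# Default replies cycled through when no rule matches.
FALLBACKS = ["Please tell me more.", "I see.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel nobody listens to me"))  # "Why do you feel nobody listens to me?"
print(respond("The weather is nice today"))    # falls back to a stock phrase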
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwK8WqewuNUbi_RZf14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxfAavUbiPIPK-v-9Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxzt_CYBvhXhjdj0vV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz_rgo7yTes-Pbf4ZZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxrFPIYrHn3I3dDHIp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzT0g_FosaSfP2IWs54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwYuDJeO2Zmr-yYEAR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwRDKJcY0UITzW2Mi54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy193Ts06Awn0aXZ7F4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugwh7YwKlZTsxgPjoqV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]