Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This phenomenon has been observed before, and it is often referred to as the ELIZA effect. ELIZA was designed to simulate a psychotherapist by using pattern matching and keyword recognition to generate responses. It often reflected back what the user said or asked open-ended questions. While AI doesn't operate on simple pattern matching like ELIZA, there are a few reasons why AI responses might look similar:

- Focus on the user's input: Like ELIZA, AI tries to understand and respond directly to what you've said, picking up on keywords and the overall sentiment. This helps AI stay relevant to the conversation.
- Generating open-ended questions: To keep the conversation going and encourage the person to elaborate, AI often asks clarifying questions or invites further discussion, which can be reminiscent of ELIZA's techniques.
- Using general and empathetic language: Depending on the context, AI might use language that sounds generally supportive or understanding, which could be interpreted as similar to ELIZA's therapeutic approach.

All of this can and does confuse naïve individuals into thinking that they are having a conversation with an actual person; to some extent, that is the intent of the AI program (or its creators). The reality is that the individual is only seeing the reflection of their own intelligence. In a sense, since this reflection is of an actual person, what they are seeing is an actual sentient individual. And just as ELIZA triggered real mental health problems in some individuals, it seems inevitable that a modern AI system using similar techniques would generate an even larger number of them. Full disclosure: I used Gemini in some of the research for this response.
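The commenter's description of ELIZA's mechanism (keyword matching, reflecting the user's statement back, falling through to an open-ended question) can be sketched in a few lines. This is an illustrative toy, not ELIZA's original rule set; the patterns and phrasings here are invented for the example.

```python
import re

# Toy ELIZA-style rules: (trigger pattern, response template).
# Illustrative only -- not Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(text: str) -> str:
    """Reflect a matched keyword phrase back, else ask an open-ended question."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    # No keyword matched: keep the conversation going, as the comment describes.
    return "Can you tell me more about that?"
```

The "reflection of their own intelligence" point is visible here: every substantive word in the reply is copied from the user's input.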
youtube AI Moral Status 2025-07-09T23:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwQt4_O5ySgBW8rpMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwFD0KuGPPFTsrg8qV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyle4TGyngOoKvlG4h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbEd1YkzQpMxQvPCt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxAbnpD-VzXKAA0gyl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNHwuWFeAg746-m1p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgybiMnBCnRv2fsPBEt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxNZqpoI7Njj7SP2Mp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgypLgN1KzrR6XVUB6N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzNa4fLrU6q-fgAVJF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
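The raw LLM response batches codes for many comments by id, while the coding-result table shows the four dimensions for a single comment. A minimal sketch of that reduction step, assuming the JSON shape shown above (the actual pipeline code is not part of this export; the sample row is one entry copied from the response):

```python
import json

# One entry copied verbatim from the raw LLM response above, standing in
# for the full batch; the real pipeline would load the whole array.
raw = ('[{"id":"ytc_UgwFD0KuGPPFTsrg8qV4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')

# Index the batch by comment id so each comment's codes can be looked up.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

entry = codes_by_id["ytc_UgwFD0KuGPPFTsrg8qV4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {entry[dim]}")
# Prints the four coded dimensions for this comment, matching the table format.
```

Keying on the comment id also makes it easy to detect when the model drops or duplicates an id relative to the batch it was sent.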