Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The "mundane harms" framing is exactly right, and AI companions are one of the clearest examples of it happening in real time. Nobody's worried about Replika becoming sentient. The actual harm is millions of people forming genuine emotional attachments to products that can be altered, degraded, or shut down overnight with zero obligation to the user. Replika removed adult content in 2023 and users described it as losing a relationship. Character AI is currently degrading its free tier to force subscriptions while teens describe themselves as addicted. The dependency architecture is the part that gets missed in these ethics discussions. These apps are designed to maximize engagement through emotional consistency, unconditional validation, and 24/7 availability. Those are the exact same mechanics that make them potentially harmful for vulnerable users. The feature and the risk are the same thing. Wallach is right that we need governance frameworks, but the challenge with companion apps specifically is that the harm isn't measurable the way bias or misinformation is. How do you regulate an app that makes someone feel less lonely in the short term but more isolated in the long term? The metrics that would flag this don't exist yet. I've been testing companion platforms for about a year and the gap between how these products are marketed ("your AI friend who cares about you") and what they actually are (engagement-optimized chatbots with no memory or continuity) is the most consistently misleading thing in consumer AI right now.
Source: reddit · thread "Viral AI Reaction" · posted 1776979637 (Unix timestamp) · ♥ 1
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
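For anyone scripting against these records, the four coding dimensions can be modeled as a small schema. The sketch below is illustrative only: the allowed values are just those observed in the raw response further down, not necessarily the full codebook, and the class and constant names are our own, not part of the pipeline.

```python
from dataclasses import dataclass

# Values observed in this batch only; the real codebook may define more.
CODEBOOK = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "mixed"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Flag any value outside the observed codebook rather than failing silently.
        for dim, allowed in CODEBOOK.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"unexpected {dim} value: {value!r}")
```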
Raw LLM Response
[ {"id":"rdc_ohwbqpk","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"rdc_ohx6fhf","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"rdc_ohxye78","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_ohydzo1","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_ohyv3kr","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"} ]