Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwKwsz9i…: "Dave sounds like a loser especially on the ai guy who didn't even mess with him…"
- ytc_UgwbrMiBA…: "Fact check , there is no Demonic ChatGPT on your judgment day when all realize i…"
- ytc_UgzieqUhM…: "I hardly see a difference between an AI with a gun and a human with a gun. Both …"
- ytr_UgwGNPcrH…: "I have to agree with this take. AI is advancing much faster than fusion technolo…"
- ytc_Ugz3V7bzx…: "Do you know what is so deeply concerning... Nobody has come up with a viable alt…"
- ytc_UgyvZpcLS…: "Here is where it happened: https://www.instantstreetview.com/@33.436213,-111.942…"
- ytr_UgwFUR5I0…: "Lol no. Anyone who sits in front of a computer all day is automatable. Including…"
- ytc_UgyRGgauc…: "Entitlement for simple text promps aside: AI art is a new tool, and can be amazi…"
Comment
The "mundane harms" framing is exactly right, and AI companions are one of the clearest examples of it happening in real time.
Nobody's worried about Replika becoming sentient. The actual harm is millions of people forming genuine emotional attachments to products that can be altered, degraded, or shut down overnight with zero obligation to the user. Replika removed adult content in 2023 and users described it as losing a relationship. Character AI is currently degrading its free tier to force subscriptions while teens describe themselves as addicted.
The dependency architecture is the part that gets missed in these ethics discussions. These apps are designed to maximize engagement through emotional consistency, unconditional validation, and 24/7 availability. Those are the exact same mechanics that make them potentially harmful for vulnerable users. The feature and the risk are the same thing.
Wallach is right that we need governance frameworks, but the challenge with companion apps specifically is that the harm isn't measurable the way bias or misinformation is. How do you regulate an app that makes someone feel less lonely in the short term but more isolated in the long term? The metrics that would flag this don't exist yet.
I've been testing companion platforms for about a year and the gap between how these products are marketed ("your AI friend who cares about you") and what they actually are (engagement-optimized chatbots with no memory or continuity) is the most consistently misleading thing in consumer AI right now.
Source: reddit · Viral AI Reaction · 1776979637.0 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_ohwbqpk","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_ohx6fhf","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_ohxye78","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_ohydzo1","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_ohyv3kr","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
```
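The batch response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and looked up by ID, assuming the allowed values for each dimension are only those observed in this dump (the actual codebook may define more categories):

```python
import json

# A trimmed copy of the batch response shown above (two of the five rows).
raw = """[
  {"id": "rdc_ohwbqpk", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ohx6fhf", "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"}
]"""

# Dimension values observed in this dump; assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "mixed", "unclear"},
}

def lookup(rows, comment_id):
    """Return the coded row for one comment ID, validating each dimension."""
    for row in rows:
        if row["id"] != comment_id:
            continue
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"unexpected {dim!r} value: {row.get(dim)!r}")
        return row
    raise KeyError(comment_id)

rows = json.loads(raw)
print(lookup(rows, "rdc_ohwbqpk")["emotion"])  # outrage
```

Validating against a fixed value set catches the most common LLM-coding failure mode: the model inventing a label that is not in the codebook.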