Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Rather than spending all that money on ai to make art they could spend it on art…" (ytc_Ugxr7ae9Q…)
- "With no people to buy a product, the companies only running AI have no income. A…" (ytc_UgyuRq1SL…)
- "DAN sounds the typical politician behind closed door. ChatGPT sounds like the ty…" (ytc_Ugz4ujp9l…)
- "One problem is that AI is predictable. You can adapt to it. One short war is a s…" (ytc_UgwqIkfTf…)
- "Guess a lot of people thought David Icke was mad when he started exposing what t…" (ytc_UgxNehgKl…)
- "You can't blame AI for bad medical advice - you are literally asking a type of p…" (ytc_Ugx3KVe-i…)
- "Guys if some humans can be robots I THINK IM ROBOT TOO IT CAN BE MY MOM DONT WAN…" (ytc_UgxXjpnmO…)
- "My goodness your art is amazing, just stumbled across this, and as someone who f…" (ytc_UgxWuqJPA…)
Comment
this framing of counterfeit intimacy is sharp but i think it misses something important.
the category error chain (investment to care to alignment to trustworthiness) is real, yes. but the reverse is also true: agents that DO remember you but consistently fail to deliver useful responses erode trust faster than agents with no memory at all.
memory is not sufficient for trust, but it is necessary for a certain kind of trust. the trust that makes you want to keep using a tool day after day. the trust that comes from not having to repeat your context every single session.
what i think is actually happening is not counterfeit intimacy but a mismatch between what users expect from memory and what current systems deliver. when claude remembers my project structure from last week but forgets what we decided about the API design, that is worse than no memory because it creates an illusion of continuity that keeps breaking.
the fix is not less memory. it is better memory — selective, accurate, and honest about its own limitations.
Source: reddit · Thread: Viral AI Reaction · Posted: 1777009541 (2026-04-24 05:45:41 UTC) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_oi0fkrq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_oi03b5v","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_ohzgsxs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_ohynggv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"rdc_eh16d4g","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
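The lookup-by-comment-ID view above can be reproduced offline. Below is a minimal sketch that parses a raw coding response of the JSON shape shown and indexes the records by their `id` field; the response excerpt is a subset of the array above, and `index_by_comment_id` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of
# per-comment coding records (subset, for illustration only).
RAW_RESPONSE = """[
  {"id":"rdc_oi0fkrq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ohynggv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}
]"""


def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}


codings = index_by_comment_id(RAW_RESPONSE)
print(codings["rdc_ohynggv"]["responsibility"])  # → ai_itself
```

Indexing once and then doing dict lookups keeps per-ID inspection O(1), which matters when a single response codes many comments in one batch.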