Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
this framing of counterfeit intimacy is sharp but i think it misses something important. the category error chain (investment to care to alignment to trustworthiness) is real, yes. but the reverse is also true: agents that DO remember you but consistently fail to deliver useful responses erode trust faster than agents with no memory at all. memory is not sufficient for trust, but it is necessary for a certain kind of trust. the trust that makes you want to keep using a tool day after day. the trust that comes from not having to repeat your context every single session. what i think is actually happening is not counterfeit intimacy but a mismatch between what users expect from memory and what current systems deliver. when claude remembers my project structure from last week but forgets what we decided about the API design, that is worse than no memory because it creates an illusion of continuity that keeps breaking. the fix is not less memory. it is better memory — selective, accurate, and honest about its own limitations.
reddit · Viral AI Reaction · 1777009541.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_oi0fkrq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_oi03b5v","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"rdc_ohzgsxs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"rdc_ohynggv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}, {"id":"rdc_eh16d4g","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]