Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples

- "Hi Sam, You are so right, I am a hobby artist but when you create and use your h…" (ytc_UgzSMiOrC…)
- "kinda is the same reason i hate AI music. when getting absolutely lost in a song…" (ytc_UgzVt6Nmq…)
- "That's a ridiculous argument. I know many people with disabilities who are very …" (ytc_UgxAHXBl-…)
- "Raise taxes to companies that use Ai, the more they used the more they pay. Forc…" (ytc_UgwEC69uN…)
- "tbh not to be rude, but the original does need work, but still thats like jeopar…" (ytc_Ugx8CBULx…)
- "Regular automation needed an intelligent mind to fix it. AI can be that mind. Th…" (ytc_UgwF8lbXD…)
- "Good god, this is like an end of high school project stating the obvious surroun…" (ytc_UgzVql8Ws…)
- "I wish I could post a screenshot. There was an actual ad for an AI assistant bel…" (ytc_UgzQDFaU6…)
Comment
>Sorry, You Don't Actually Know the Pain is Real
That's completely consistent with OP's post. OP might personally believe the pain is real (seems like they do; I don't), but they didn't argue that. They just argued, reasonably, that we can't have certainty that the observable emotional pain is fake. That is reasonable. We do not understand human consciousness well enough to intentionally replicate it perfectly (which doesn't mean we can't "luck" out when explicitly building something modelled off part of how we think the brain works), and we don't understand human consciousness well enough to assert that an LLM bears no similarity to it. As a hypothetical, it is a possibility that in some part of how our brains function there is something analogous to a prediction engine for concepts which our consciousness derives from, and it is also possible that a classical computing prediction engine which is powerful enough can achieve a similar end result *in the ways that matter*. I'm not claiming that and I don't believe that, but the certainty with which people say "It can't feel pain because it's [XYZ thing that we built]" is unfounded.
reddit · AI Moral Status · 1676628221.0 · ♥ 19
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id": "rdc_j914woe", "responsibility": "none", "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_j8wt0sj", "responsibility": "none", "reasoning": "mixed",            "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_j8v0w3f", "responsibility": "none", "reasoning": "unclear",          "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_j8vzo3j", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_j8w3ud4", "responsibility": "none", "reasoning": "unclear",          "policy": "unclear", "emotion": "resignation"}
]
```
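The raw response above is a JSON array in which each record carries a comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) shown in the Coding Result table. As a minimal sketch of how such a batch response could be turned into a per-comment lookup, assuming only the field names visible in the JSON above (the function name and the `None`-on-miss behavior are illustrative, not part of the tool):

```python
import json


def lookup_coded_dimensions(raw_response: str, comment_id: str):
    """Parse a batch coding response and return the coded dimensions
    for one comment ID, or None if that ID is not in the batch."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None


# One record from the raw response above, used as sample input.
raw = ('[{"id":"rdc_j914woe","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
print(lookup_coded_dimensions(raw, "rdc_j914woe"))
# → {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'indifference'}
```

Keeping the `id` out of the returned dict means the result maps directly onto the Dimension/Value table rendered for each comment.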