Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I am seriously waiting for the day that the proof you are not a robot CAPTCHA wi…
ytc_UgykpQhtI…
Why do you think that a society that only produces entrepreneurs would be succes…
ytr_UgyHUm0a2…
Do we want A.I. to be humanlike, or better than human? Grok, it seems, is alread…
ytc_UgxFwL1R0…
Why did you mention "Iran" as working on AI? If history has taught us anything,…
ytc_UgyCix13Q…
AI can't be used for anything that requires 100% accuracy ( deterministic ), exa…
ytc_UgzGEucBK…
Hey @pedrocardoso5863, thanks for your comment! I appreciate your thoughts on "O…
ytr_Ugx84PmT5…
AI is fire. And burn your house down or heat your house. Like a baseball bat. …
ytc_UgxBCyjVt…
Proposal to change title to:
That one time the internet made the perfect AI waif…
ytc_UgxZI2zK4…
Comment
The Gewirthian framing is interesting but it front-loads a controversial premise. Gewirth's PGC derives obligations from the necessary conditions of agency itself, which means the argument only lands if ASI meets his criteria for purposive action. That's doing a lot of work quietly.
The semiotic problem you mention seems like the stronger original contribution. If we lack the conceptual vocabulary to correctly describe ASI agency, then both alignment and containment are solutions to a problem we haven't correctly stated yet. That's a genuine prior issue that most AI safety discourse sidesteps.
reddit · AI Moral Status · 1775208597 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_oe4apgm", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe1c25i", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe7mbdf", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe7rqc3", "responsibility": "unclear", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oe1ivlw", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
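The raw model response is a JSON array with one object per coded comment, keyed by comment ID. The "look up by comment ID" step described at the top of the page can be sketched minimally as follows; this is an illustrative sketch, not the tool's actual implementation, and the variable names (`raw_response`, `codes_by_id`) are made up for the example. The two records shown are taken verbatim from the raw response above, truncated for brevity.

```python
import json

# Raw model output: a JSON array of per-comment codes (two records
# reproduced from the page above; the full response has five).
raw_response = """[
  {"id": "rdc_oe4apgm", "responsibility": "unclear",
   "reasoning": "deontological", "policy": "unclear",
   "emotion": "indifference"},
  {"id": "rdc_oe1c25i", "responsibility": "unclear",
   "reasoning": "deontological", "policy": "unclear",
   "emotion": "outrage"}
]"""

# Index the coded records by comment ID for constant-time look-up.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

print(codes_by_id["rdc_oe1c25i"]["emotion"])  # -> outrage
```

Indexing by ID once, rather than scanning the array per query, is the natural design when many comments are inspected against one response.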