Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ai animations always reminds me of amateur MMD videos. Dead eyed, shaders downlo…" (ytc_UgwK7pZJA…)
- "specification/declaration & verification becomes even more important than implem…" (ytc_UgxxpmdW1…)
- "I guess this: *"can't definitively prove one way or another"* ... will get har…" (rdc_jwyxxwo)
- "If you don't treat AI with respect but tool now. They are gonna remember that la…" (ytc_Ugzd-LWU0…)
- "Lawyers being 2 is insane. No way, lawyers are one of the safest groups. AI cam …" (ytc_Ugz7Kgcbl…)
- "I dropped copilot for this reason too. It wastes my time. I usually just talk t…" (ytc_UgywXDBoU…)
- "One of my favorite kinds of humor is an ai giving a brutally honest or accurate …" (ytc_Ugz2rJ1S8…)
- "@AsherTheFemboy nah bro just blow up duh AI bro, like a big ol TNT stick bro ju…" (ytr_UgwyMcGXN…)
Comment
> by what test can we prove that we do, but it doesn't?
Leave a human and ChatGPT to their own devices, prompt neither of them, see if the human and ChatGPT act differently.
My guess is the human would get bored and leave, whereas ChatGPT would just sit there. But hey, we'd have to perform the experiment to be sure.
reddit · AI Moral Status · timestamp 1674714854.0 · ♥ 1
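The numeric value in the comment metadata is a Unix epoch timestamp (seconds since 1970-01-01 UTC). A minimal standard-library sketch converts it to a readable UTC date:

```python
from datetime import datetime, timezone

# The comment's timestamp field, as stored: seconds since the Unix epoch.
ts = 1674714854.0

# Convert to an aware UTC datetime for display.
posted = datetime.fromtimestamp(ts, tz=timezone.utc)
print(posted.isoformat())  # 2023-01-26T06:34:14+00:00
```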
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_j5vxhis","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_j5wr3rk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_j5w4rco","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_j5xkp79","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_j5wh7xd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
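Since the raw batch response is plain JSON, the coded values can be parsed and indexed for lookup by comment ID with a few lines. A minimal sketch, assuming the model returns a JSON array of objects carrying `id` plus the four coding dimensions shown above (the helper name and the validation rule are illustrative, not part of the original tool):

```python
import json

# Two records copied from the raw response above, for illustration.
RAW = '''[
  {"id":"rdc_j5vxhis","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_j5wr3rk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions visible in this sample.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and index records by comment ID,
    skipping any record that is missing a coding dimension."""
    records = {}
    for obj in json.loads(raw):
        if "id" in obj and all(dim in obj for dim in DIMENSIONS):
            records[obj["id"]] = {dim: obj[dim] for dim in DIMENSIONS}
    return records

coded = index_by_id(RAW)
print(coded["rdc_j5vxhis"]["reasoning"])  # mixed
```

Indexing by ID mirrors the "look up by comment ID" affordance of the page: one parse of the raw response, then constant-time lookups per comment.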