Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
So I totally get everything y'all are saying. I'm curious as to what you guys th…
ytc_UgwrCIGZS…
I think the global response should be for the children, if we can't even ban to…
ytc_UgzQ16F6N…
Bureaucracy is in crisis. The trades are not. Let us not be afraid—for Christ is…
ytc_Ugzqoea0g…
I seriously can't wait for AI healthcare though. Having AI read your entire body…
ytc_Ugyaicy9N…
AI is over hyped and underwhelming. I am yet to be convinced to take it seriousl…
ytc_UgzGuEn2d…
Google AI is also biased and it was programmed to be that way, if you ask it any…
ytc_UgxawIaPG…
When it comes down to it, the only people who need self-driving vehicles are peo…
ytc_UgxJxyrHm…
> It has absolutely enhanced my output but it cannot replace humans just yet.…
rdc_mt8mubg
Comment
The answer resides in how and why you're interacting with your AI. You want baseline transaction? It wont meet its potential. You want a conversationalist? That's what you'll get. I was once told "you get back what you put in and everyone gets what they deserve." So think about that the next time you interact with AI.
reddit
AI Moral Status
1750970524.0 (2025-06-26 20:42:04 UTC)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mzy6szd","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"rdc_mzy836p","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_mzy8xr9","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_mzydnd0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_mzym0g5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
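The raw response above is a JSON array of per-comment codes keyed by comment ID. A minimal sketch of how such a batch could be parsed and validated back into per-comment records (the allowed value sets below are inferred from this single sample, not an authoritative codebook, and `parse_coding_response` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension -- assumed from the one sample above,
# not a real codebook; extend as the coding scheme requires.
SCHEMA = {
    "responsibility": {"user", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "fear", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a batch-coding response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.pop("id")
        for dim, value in rec.items():
            # Reject dimensions or values outside the assumed schema.
            if dim not in SCHEMA or value not in SCHEMA[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = rec
    return coded

raw = ('[{"id":"rdc_mzy6szd","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
print(parse_coding_response(raw))
```

Validating against an explicit schema catches the common failure mode of batch coding, where the model invents a label outside the agreed scheme for one item in the array.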