Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think self driving cars will be more safer by 2035. Current technology hasn't …" (ytc_UgyBkdfvL…)
- "A shocking amount of people lose all ability to question sources the moment an L…" (rdc_oa6k4x9)
- "Oh not the AI BS. Yeah, besides all the internal rumbling that is happening. The…" (ytc_UgxKyPgaN…)
- "But what if I put weeks into making a video using my creativity, but I use AI? T…" (ytc_UgzqxRhDG…)
- "Don't worry, the robots on our channel are all about sharing knowledge and learn…" (ytr_Ugx73xzf_…)
- "This is starting to sound like the depiction of mice in the Hitchikers Guide nov…" (ytc_UgzDHZCcG…)
- "We have more than enough evidence of the dangers to act now to develop safety pr…" (ytc_Ugz0BsF2R…)
- "So, I'm a software engineer, I use chatgpt to 'assist' in my work, but do I get …" (ytc_Ugyw1LJF4…)
Comment
Just a word of caution: Our superpower as humans (imo) is our ability to empathize with anything we see as reflecting back a bit of our humanity.
Ghost in The Shell is a story we made up! It only works because it tugs at our heartstrings by asking us to empathize with something that displays a noticeable *humanity*. And thus the empathy comes easy! And thus the story becomes good! This is the main reason you (and so many of us) still connect with the story.
It feels weird to me to use a human-made story to understand real AI, something which arises not to tug at our human empathy, but out of the much-less-sexy reality of statistical algorithms and ML techniques.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Posted (Unix time) | 1749784575.0 |
| Score | ♥ 37 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mxg663b","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mxi1v7q","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"rdc_mxg2tr7","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_mxhv163","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mxi0zue","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
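The raw response is a JSON array with one record per coded comment, keyed by `id`. A minimal sketch of how such a batch could be parsed and indexed for lookup (the `index_codes` helper is illustrative, not part of the tool; the field names come from the response shown above):

```python
import json

# The raw model response: a JSON array of per-comment coding records.
raw_response = """[
  {"id":"rdc_mxg663b","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mxi1v7q","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"rdc_mxg2tr7","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_mxhv163","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mxi0zue","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a batched coding response and index the records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_codes(raw_response)
print(codes["rdc_mxi1v7q"]["reasoning"])  # -> virtue
print(codes["rdc_mxi1v7q"]["emotion"])    # -> approval
```

Indexing by ID is what makes the per-comment view possible: the dimensions shown in the Coding Result table are just the record for that comment's ID pulled out of the batch.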