Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Sorry, WHERE in the video is "AI's first kill" ..?? O_O” (ytc_Ugwe7n6OI…)
- “Im gonna order a shit ton of sex robots so that the robot workers eventually bec…” (ytc_UgxpU3Ug_…)
- “One thing i dont understand. In order for AI to innovate non stop, we would have…” (ytc_UgxCDF_U1…)
- “About the paralegals, it'll be the juniors and associates who'll be out; not the…” (ytc_UgzmxtK5_…)
- “AI’s changing the game, and I’m just glad AICarma’s got my back with their monit…” (ytc_UgydjuZiL…)
- “"Tesla Autopilot Crashes into Motorcycle Riders - Why?" Because Elon despises t…” (ytc_UgyKbfNww…)
- “We are approaching the singularity and see/feel the shortening of our prediction…” (ytc_UgzC8q4Dh…)
- “We live and we die so be ready when the demons reveal themselves this year and b…” (ytc_UgyPh8Kaj…)
Comment
It seems like we're reaching the [Uncanny Valley](https://en.wikipedia.org/wiki/Uncanny_valley) of AI, where the artificial intelligence is becoming close enough to being human like that it's causing revulsion/hostility due to it's closeness to being human, but not quite reaching the mark. People have played video games where they do violent acts to virtual characters, but they know it is just a game and most people aren't overly concerned. The characters in the games can clearly exhibit behavior which players understand is programming, such as getting stuck on a low wall and continuing to run forward, etc. which causes morality to be shut off and treating the "enemy" in the game as a figment of reality or perhaps like a human may treat an annoying fly in real life.
In a competitive game, players have the knowledge that the enemies are being controlled by a human and they know that no physical harm will come to them if they destroy them in a virtual context (unlike non-player AI which is obviously "destroyed" or ceases to "live" when destroyed, even if the player knows they can reboot the game to re-create it.) I think the uncanny valley is being created mostly because people often text other real life humans and so the reality of the AI as a conscious being becomes plausible to them. This is really just a tragedy of less face to face contact in society.
If it continues to evolve, perhaps showing a face or allowing a video conference with the AI, then this uncanny valley will continue to increase up to the point where only those who truly "know" the entity they are conversing with is an AI will treat it differently from another human. Then there are problems with reverse-logic where people will recognize traits in living humans which resemble AI and will shut their morality down when dealing with actual humans, treating them like AI as well.
I think the only way humanity will survive this wave of AI is if devices are made for humans to share emotions/pain and oth
Source: reddit · AI Moral Status · Posted: 1676663602.0 (Unix timestamp) · ♥ 2
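The posted-at value above is stored as a Unix timestamp. If you need it human-readable, the standard-library conversion is a one-liner (a quick sketch, not part of the tool itself):

```python
from datetime import datetime, timezone

# Convert the record's Unix timestamp to an aware UTC datetime.
posted = datetime.fromtimestamp(1676663602.0, tz=timezone.utc)
print(posted.isoformat())  # -> 2023-02-17T19:53:22+00:00
```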
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_j8y44v2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_j8yoa6y","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_j8z89xg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"rdc_j90rnaz","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"rdc_js2r2of","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
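A batch response in this shape is straightforward to index back into per-comment records. The sketch below shows one way to do the "look up by comment ID" step; the `lookup` helper is illustrative only (not part of the tool), and the field names simply follow the JSON above:

```python
import json

# Two records copied from the raw response above.
RAW = """
[
  {"id":"rdc_j8y44v2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_js2r2of","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
"""

def lookup(raw: str, comment_id: str) -> dict:
    """Parse a batch coding response and return the record for one comment ID."""
    records = {rec["id"]: rec for rec in json.loads(raw)}
    return records[comment_id]

row = lookup(RAW, "rdc_j8y44v2")
print(row["emotion"])  # -> indifference
```

In practice you would also want to handle a malformed response (the model may emit text around the JSON) and missing IDs, e.g. by catching `json.JSONDecodeError` and `KeyError`.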