Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If you would like America to be a 911 dispatch " box " automated attendance, f…
ytc_Ugw-7ZEL9…
@mad_titanthanos agreed. But it's IVR now which is not that intelligent with AI …
ytr_UgygKke0k…
LLMs require a regular stream of untainted human art before it starts cannibaliz…
ytc_UgygEvgpQ…
I fully understand why they took over creativity first. Creative tasks require t…
ytc_UgzKOIGKs…
You look like a robot.. but ugly robot one who do not care about HUMANS, all for…
ytc_UgyngT1u3…
What will make or break AI in Art is the perception of value issue. I think a lo…
ytc_UgwPYp-oN…
AI should replace
Government. Imagine a super smart AI prompted only with the te…
ytc_UgynPoSix…
We are in trouble. With all the Inconsiderate a$$holes out there wrapped in the…
ytc_Ugy4jYrl3…
Comment
There is nothing human like with AI. I cannot believe the developers don't understand this.
AI barely thinks at all, it's all processing routines, as always.
Anything truly harmful coming from AI will only reflect the stupidity of a very few and powerful idiots.
So, so stupid.
...
It's like I would have to, involve myself. Maybe should.
Conspiracy theory territory, someone misguided or not, might be steering AI development into chaos purposefully, but acting like this process is accidental or unintentional.
Do they even care for what data is being put in the AI? They give it everything people on the internet say/post and say "we don't understand what is happening"? And if/when things go to shit they are saying it's unknown why things got bad?
It's also as if they deserve it (the devs), but what about everyone else?
AI, LLMs should be relegated to what they are used for now, and not much more than that.
Source: youtube · AI Moral Status · 2025-12-16T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyzk5RcKcF4Y69ZxCx4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzp4PvqydJKmblSibB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_Ugxdr8bzDSb90inH4Q14AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzmuQsUxxV7p1q8KkZ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwtvR_eyp_RO9YB6wt4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzjqqlMHr0n_R7DiQF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyvr3tc8fieR-JeJfx4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxxfYNrM-SoboiK0fB4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyeRAn3_UwkOD9YuFd4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "none",    "emotion": "resignation"},
  {"id": "ytc_UgzCkU7Ij7_XzVefvNt4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",     "emotion": "outrage"}
]
```
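The "look up by comment ID" step above can be sketched in a few lines of Python: parse the raw LLM response as JSON and index the coded rows by their `id`. This is a minimal sketch, not the tool's actual implementation; the two records embedded below are copied from the raw response shown above, and the variable names are illustrative.

```python
import json

# Raw LLM response: a JSON array where each element codes one comment on
# four dimensions (responsibility, reasoning, policy, emotion).
# Only two of the ten records are shown here for brevity.
raw_response = """[
  {"id": "ytc_Ugyvr3tc8fieR-JeJfx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzCkU7Ij7_XzVefvNt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

codes = json.loads(raw_response)

# Index the coded rows by comment ID so any comment's coding
# can be retrieved in a single dictionary lookup.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_Ugyvr3tc8fieR-JeJfx4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer outrage
```

The first record matches the Coding Result table above (responsibility: developer, reasoning: deontological, policy: unclear, emotion: outrage), so a lookup on that ID reproduces the table's values.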