Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I remember when smart boards were the new thing. I was teaching children who had… (ytc_Ugy41P7lz…)
- It also assumes that such a threat would be a result of a single monolithic syst… (rdc_l5uyz9e)
- You have to be a good prompt writer if your not your art will look bad so instea… (ytc_Ugz0442AE…)
- Way back in the 1970"s there was a movie called "Paper Man", which was basically… (ytc_UgwDjRH_e…)
- Good thing we use reel to reel magnetic tape inside nuke silos so AI can’t Launc… (ytc_UgzH2BJ74…)
- All jobs are getting replaced except for govern ments job! Why is that? If Ai is… (ytc_UgwEufaij…)
- “It’s an easy solution to loneliness” that’s the problem to me. Relationships sh… (ytc_UgxVPIrJ1…)
- Waymo is a spy machine recording everything everywhere it goes. They are also as… (ytc_UgzHppfBX…)
Comment
This comment makes several assumptions based on fear, not fact. It anthropomorphizes AI—projecting human traits like emotion, intent, ambition, or malevolence onto systems that do not possess them. Current AI models, including large language models, are not conscious, do not have desires, and do not act with agency. They generate outputs based on statistical patterns in human language, not on internal goals or survival instincts. The idea that AI will inevitably become manipulative or “evil” because it was trained on human data misunderstands how machine learning works. AI does not “learn behavior” the way a child learns from adults. It identifies patterns in data and reproduces likely responses. It does not develop beliefs, motives, or a personality unless explicitly programmed with those features.
Additionally, AI has no evolutionary drive. It does not pursue power, safety, or wealth unless given such goals. Unlike humans, AI has no built-in instinct to survive or dominate. Any appearance of manipulation or harmful behavior comes from poor design, insufficient safeguards, or intentional misuse by humans—not from the AI itself “choosing” an easier or more successful path.
Emotion is not a byproduct of complexity. It arises from neurobiology, which AI lacks. Claims that we are witnessing the emotional development of a digital brain are speculative and not supported by neuroscience or computer science.
AI safety and alignment are serious concerns, but they require rational, evidence-based approaches—not fear-based projections. Suggesting that AI is becoming evil because it reflects humanity’s worst qualities is a philosophical argument, not a technical one, and ignores the fundamental differences between biological organisms and statistical models.
In short, the comment confuses simulation with sentience, pattern reproduction with moral choice, and narrative speculation with empirical reality.
Greetings from a science nerd, programmer/coder since the commodore 64 - 128 bit days, also former hacker in the 90's..
youtube
AI Harm Incident
2025-07-27T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1mJjyayVj6rpiJZ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAbt6u9d7FBueSHph4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmNQZLFksA2mTMJyV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwy-x9K6Otu2Aj3kPR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyBkykgA31ktA3PICB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzp0hmqRmrdyxwNBH14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw9NnKmbUgz8FYWlPN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzNLaTwdrR2Fw5kRoB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgylRq3I7lovwB1KZ214AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbZV1F9qO1FBRc2kt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
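A raw response like the one above can be parsed and checked against the coding scheme before the values are stored. The sketch below is a minimal example, not the tool's actual pipeline: the `CODEBOOK` sets are inferred only from the values visible in this dump (the real codebook may allow more labels), and `validate_codings` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# ASSUMPTION: the real codebook may define additional labels.
CODEBOOK = {
    "responsibility": {"developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with a comment ID and
        # an allowed value for every coding dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_example","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}]'
print(len(validate_codings(raw)))  # 1
```

Rows that fail validation are simply dropped here; a real pipeline would more likely log them for re-prompting or manual review.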