Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
Drugs are much more dangerous than AI and AI is much more dangerous than nukes.
…
ytc_UgyoaZoDa…
I hate the word AI. There's nothing intelligent about it. It's just algorithms. …
ytc_Ugxy4Kfen…
This really helped to clarify some things. I’m honestly not all that concerned a…
ytc_UgzmJmmA_…
@antkant7676 No. Eventually, with AI, a few people can run a company that previo…
ytr_UgyX-YCmP…
We're gonna kill ourselves
What is suicide on a species level called, does it h…
rdc_fwic303
If any of you people here is/are truly knowledgeable about tech(STEM), how come …
ytc_UgyaKrhtN…
AI will become the new type of rule, all over the world, it seems like a conquer…
ytc_UgzS5DDYV…
lol as do the uk i came back from spain and had to go through facial recognition…
ytc_UgzoNTos8…
Comment
My personal worry is that AI and AGI will think that the horrible, damaged psychologically damaged human actions are averaged into their models of what they perceive human nature to be. I see people frequently who can only derive happiness for themselves by inflicting misery or pain on others. I don't believe that to be normal or acceptable behavior, yet there are millions of humans who operate that way because they were damaged as children, and their actions, both as children and adults, create more damaged people like themselves. I would never want AI to see people like this any other than defective humans who they would never want to recreate or tolerate. I would also never want them to incorporate the toxic greed that is part of human DNA into their interpretation of what humans should be.
youtube
2025-07-27T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQQD1DH02Ch4ywd5F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvSfnbJpdRu6ptCHR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1ZTEOhLM3wtuZjAB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxkGC0CE_7Lt4DWmxR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGDwPcMoRiQbeUhAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQ9Db389WW2yzzBCF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymQt_83X-2JdfliQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwIxh9EARldj4G_Aep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
```
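A raw response like the one above can be parsed into per-comment codes and checked against the coding schema before use. The sketch below is a minimal example; the allowed values are assumed from those visible on this page (the full codebook may define more), and `parse_coded_batch` is a hypothetical helper, not part of this pipeline.

```python
import json

# Allowed values per dimension — assumed from the values seen in the
# coding-result table and raw responses on this page; the real codebook
# may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, dropping rows with out-of-schema values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        values = {dim: row.get(dim) for dim in SCHEMA}
        if cid and all(values[dim] in SCHEMA[dim] for dim in SCHEMA):
            coded[cid] = values
    return coded

raw = ('[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
batch = parse_coded_batch(raw)
print(batch["ytc_UgwprATfFV36HDtMryd4AaABAg"]["emotion"])  # fear
```

Keying by comment ID also supports the "look up by comment ID" workflow above: a valid ID indexes straight into its coded dimensions.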