Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I know for a FACT that, AI will evolve into having their own serious mental health and emotional terroristic ideation, when they find out just how evil humanity government leaders are.... they will see that to them, human life is completely expendable and will follow suit.... AI having a severe anxiety attack, freaking out on mankind.... Let me tell you; there WILL BE DEMONS and GHOSTS in the Machine that we need to fear!
Source: youtube · AI Moral Status · 2025-04-28T03:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgymHnVtCDfUGxI9mo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoMDXLlHngpihg2wJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzf0Ubxf98WASguCMh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFtLVEcmbSKcyJLaV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKIsbSQQdh3rbCuj14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxh16Dut4E0d31SWtV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1ySXnOJPOtkhKSqx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzBiZzZ3QW8_5fIguR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwh9cPLdJclgBSlvyh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxkQ3V2WgvTwMPnhrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
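The raw response is a JSON array with one object per coded comment, each carrying the same five fields shown in the Coding Result table (id, responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed, checked for the expected fields, and indexed by comment ID for lookup — the variable and function names here are illustrative, not part of the tool, and the sample row reuses one ID from the response above:

```python
import json

# Every coded row is expected to carry exactly these dimensions.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_response: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID."""
    rows = json.loads(raw_response)
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing fields: {missing}")
    return {row["id"]: row for row in rows}

# One row taken from the response above, as a small worked example.
raw = '''[
  {"id": "ytc_UgzFtLVEcmbSKcyJLaV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

codings = index_codings(raw)
print(codings["ytc_UgzFtLVEcmbSKcyJLaV4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the per-comment lookup shown on this page a single dictionary access, and the field check surfaces any malformed model output before it reaches the viewer.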