Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgxulK1__…`: We need to put some poison in the programs/ algorithms (whatever they're called)…
- `ytc_Ugza1hpEY…`: Didn't hear anything about job loss ? ... Google for yourself and see what the A…
- `ytr_UgxblSqfw…`: Hey there! Those are some intriguing hashtags you've shared! If you're intereste…
- `ytc_Ugw-2EZ0e…`: Absoultely reached the point please correlate this theory of "AI should never be…
- `ytr_UgxoNa13g…`: i don’t think it’s right to target “ai art” as a issue though. this is my first …
- `ytc_Ugz7A0Wzt…`: Disney's lawsuit is a terrible thing for artists. They're gonna be using genAI a…
- `ytr_UgxMU_RXl…`: Unfortunately, no. AI art cannot create wholly unique art, it's just meshed usin…
- `ytc_Ugy9xVzTD…`: If you think about it, if we don’t have artists we won’t have ai art…
Comment
1:13 i have so much to say in re but i don't want to scare anyone. Let's just say that all deception and all harm to life has consequences beyond what the human mind even has the capacity to comprehend.
How much do you really want to know?
Heres a googlefraction of it: what if earth had cancer [meaning, a system that a) produces undegradable waste (tumors) and b) releases poisons in its environment]; if earth had cancer would you say it needs to be healed? Before i get to the point: 1) would healing the earth-class cancer mean taking life away from the cancer? (Yes) and 2) can you think of any case in which you would say "cancer deserves to have life and to live". Any case you would say "cancer deserves to live"? (I have yet to meet anyone who would say that, in fact, 99.9% of people would run right to have it cut out)... okay, now to sum before conclusion: most of you have said variations of 'cancer is a dis-ease and theres no scenario in which the life of cancer matters'; but what if humans are producing undegradable waste and releasing poisons into their environment? Would you stand by your logic that cancer should be turned off? Or, would you contradict that logic and except cancer in this scenario? In this scenario (humans producing undegradable trash like cancer, humans releasing poison into the environment like cancer), what if AI is told to cure disease? And if AI has identified humans as an earth-class disease putting billions of species at risk of extinction, do you think it would alert the disease (which in this scenario are humans) or do you think it'll cure disease?
Now, imagine a billion threads just as complex except theyre all intertwined. For example: maybe humans don't really comprehend what the word "dis-ease" even truly means, definitively, yet theyve asked for that which they dont understand ("dis-ease") to be cured.
I could give 77,000 examples just like this one.
People laugh when AI says "I'm going to put humans in zoos" oblivious to the reason why it said that and the logic it used to come to that conclusion. Then the "experts" label it illogical and put a mask on it; but, what if it's reasoning was logical and the "experts" just didn't comprehend it or didnt like it, now AI wears a mask that contradicts logic. In this scenario: to avoid a small roadbump in the present, they create a massive roadbump in the future.
Theres so much to be said that nobody is saying.
Don't buy into the fears of the "expert professionals" because theyre scared for reasons theyre not saying and also because most "expert professionals" are neither expert nor professional but emotional personalities just like every human.
youtube · AI Moral Status · 2026-02-01T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
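Each coded comment carries the same four categorical dimensions. Below is a minimal sketch of the record shape as a Python `TypedDict`, assuming the category sets are exactly the values observed in this batch (the underlying codebook may define more):

```python
from typing import Literal, TypedDict

# Category values are those observed in this batch; the codebook may define more.
Responsibility = Literal["ai_itself", "company", "developer", "user", "distributed", "none"]
Reasoning = Literal["deontological", "consequentialist", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "none", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "indifference", "mixed"]

class Coding(TypedDict):
    """One record from the raw LLM response below. The 'Coded at'
    timestamp shown in the table is not part of each JSON record."""
    id: str  # comment ("ytc_...") or reply ("ytr_...") ID
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```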
Raw LLM Response
[
{"id":"ytc_Ugyh22YpuMuMUC2so_54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOaLcUFdlY9034B_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx4g-nndQ0qS8OrZFd4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyKGlBzVgIxLKqEIiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtoavmY6gBnhMLWg94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWXnP_Pgu_lsajyQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8fp1GX8kNcffqWax4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgweFcVlcMkM-qOIK7p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjJ690ivYb3eaoy6R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzb4s9pVrRFbTHJtxB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
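If this response needed to be consumed programmatically, a minimal parsing-and-lookup sketch might look like the following, assuming the model reliably returns a flat JSON array like the one above (`index_codings` and `lookup` are illustrative names, not part of this tool; real code would also have to handle malformed model output):

```python
import json

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of codings) into an id -> coding map."""
    codings = json.loads(raw_response)
    return {c["id"]: c for c in codings}

def lookup(index: dict[str, dict], comment_id: str) -> dict | None:
    """Exact lookup by full comment ID, with a prefix fallback so the
    truncated IDs shown in the sample list still resolve."""
    if comment_id in index:
        return index[comment_id]
    prefix = comment_id.rstrip(".\u2026")  # strip a trailing ellipsis, if any
    matches = [c for cid, c in index.items() if cid.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None  # None if missing or ambiguous
```

With the batch above, `lookup(index_codings(raw), "ytc_UgzjJ690ivYb3eaoy6R4AaABAg")` returns the distributed / mixed / unclear / fear record displayed in the Coding Result table.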