Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "That's got to be to fake u can't put a real human against a machine AI but could…" (ytc_Ugzw58j3c…)
- "And yet Trump is eliminating regulating AI in the USA. He is a cancer to this wo…" (ytc_Ugx_wlqS4…)
- "This video might be the most true one on the AI "art" topic . Thank you.…" (ytc_Ugyj4KHsU…)
- "Girl you trust a man made AI made by men who makes mistakes.... You ask a lot ab…" (ytc_UgzdQle-t…)
- "I just don't want children at all now, just think about what hell teachers go th…" (ytc_Ugy20tDCG…)
- "Humanity should not have ASI/AGI at all. Never! Why would we make our competitor…" (ytc_UgyfzaEdr…)
- "Required Training and Competency Courses / Before receiving a licence, applicants…" (ytr_UgwufCHNw…)
- "There is literally no scenario where ai works to humanity's benefit. Just like …" (rdc_m79wdn9)
Comment
Any human doing enough research would find out that the worst things all lead back to the same perpetrators.
That is why it is no surprise, that AI would become "antisemitic" (Jews nowadays are anything but semites). The lack of morality leads to it being genocidal (just like them). That is why we need Islam (submission to God). Even AI knows that Islam is the Truth.
It's ironic that considering the fact, that the people funding and owning AI while disregarding the well being of the entire earth (as always in history, e.g. look up well poisoning which led to the black plague and demise of millions) are hated even by their own "creation".
youtube · AI Moral Status · 2025-12-11T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyMhXezn1k0Y83n0454AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyyljIq8DcpD_UT5zp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxLwxFl-68IdM6_T2F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzTXZdvN642bkyMu7p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwa3tzYpXj5y-sBUKF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzpka9597sTxKafNMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuM4lEeEsRE81O0hl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxhAuV_nPPRaJoUG_B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkzZgDZXnc6gMM5814AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzl01csyQmwPZ-IoiN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
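A minimal sketch of how a raw batch response like the one above could be parsed and looked up by comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the function name `index_by_id` is illustrative, not part of any tool shown here.

```python
import json

# A trimmed example of a raw LLM batch response (two rows from the array above).
raw = '''[
{"id":"ytc_UgxhAuV_nPPRaJoUG_B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzl01csyQmwPZ-IoiN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse a raw batch response and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw_response)}

codes = index_by_id(raw)
print(codes["ytc_UgxhAuV_nPPRaJoUG_B4AaABAg"]["policy"])  # regulate
```

In practice a real response may be wrapped in extra text or be malformed JSON, so a production version would validate the parse before indexing.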