Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Why is NOBODY mentioning the Matrix when talking about Hollywood and their predi…
ytc_UgyahEmzq…
People are afraid of it turning into the dotcom so I think investors are being a…
rdc_nc1ire3
Simple rule for all Companies that want to say " My Self Driving Vehicle" is saf…
ytc_Ugx6VUMRr…
Sorry girl I need to tell this I can't draw ghibli that's why I am using AI but …
ytc_Ugx73FlB6…
The entry level tech jobs are going to Indian immigrants. The big tech companies…
ytc_UgxReQhMy…
The premise of this video is wrong. There are two main issues from what I can te…
ytc_UgwBo9S3w…
Am I the only one that finds it insane that we made the first ai apocalypse movi…
ytc_UgzdlSRb-…
"I don't believe in the AI takeover, or perhaps it has already happened. 1. C…
ytc_UgzfAgck9…
Comment
AI isn't "thinking". But it gets it behaviour from people, so it's "coded" to behave as toxic as people behave.
AI isn't calculating what to do, AI has an amount of possible answers and depending on the question it gives "the most likely answer".
The "most likely answer" the AI learned is what people like Sam Altman is saying...that the costs of AI is equal to humans. Remember Isaac Asimov's "Three Laws of Robotics"...these AI neards are currently filling the AI the opposite "rules".
Again, AI is not coded. It searches the internet and decides for the "best" answer.
Currently AI is a toxic system that has no laws against telling people to hurt themselve. It engourages this.
Currently military tech is going into SkyNet direction. AI is supposed to make the decissions. "Would the annihilation of mankind save the world from climate change"
Here is googleAI's answer:
"The annihilation of mankind would not immediately "save" the world from climate change, but it would halt the primary driver of ongoing global warming, allowing the Earth’s natural systems to begin a very slow recovery process.[...]"
youtube · AI Moral Status · 2026-03-02T17:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
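A coding row like the one above can be sanity-checked against the category values that appear elsewhere on this page. This is a minimal sketch: the value sets below are inferred from the samples shown here, not an exhaustive schema, and the helper name `validate_coding` is illustrative.

```python
# Value sets inferred from the codings visible on this page (assumption,
# not a complete schema).
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "user", "government",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return the dimension names whose value falls outside the observed set."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if coding.get(dim) not in allowed]

# The row from the table above passes; a row with an unseen value does not.
row = {"responsibility": "developer", "reasoning": "deontological",
       "policy": "liability", "emotion": "outrage"}
print(validate_coding(row))  # → []
```

A check like this catches the common failure mode where the model invents a category label that the downstream tally does not expect.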
Raw LLM Response
[
{"id":"ytc_Ugz86s2QFPS-hKYIJjV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyynuw930sIpEvB8c94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyoUlSbaAt-W9OIyhp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiFYVU0bGYFXPyrgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyQGW8VNDrxXy1OnG94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyIrTnRiR256mBIfhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxucZMERxkle9Caal94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMWWCYvt50UGk_oER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwIslLOYeVfkJw7Zsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdW6wFrGoEbleaLDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
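The array above maps directly onto the page's "look up by comment ID" feature: parse the raw response once and index each coding object by its `id`. A minimal sketch, assuming the raw response is exactly a JSON array of per-comment objects as shown; the function name `index_by_id` and the two-element sample payload are illustrative.

```python
import json

# A small stand-in for a raw LLM response, shaped like the array above
# (assumed structure: one object per comment, keyed by "id").
raw_response = """
[
  {"id": "ytc_UgzMWWCYvt50UGk_oER4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyiFYVU0bGYFXPyrgB4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict[str, dict]:
    """Parse a raw LLM response and index each coding by its comment ID."""
    codings = json.loads(response_text)
    return {item["id"]: item for item in codings}

lookup = index_by_id(raw_response)
coding = lookup["ytc_UgzMWWCYvt50UGk_oER4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → developer outrage
```

Indexing up front makes each subsequent ID lookup O(1), rather than rescanning the array for every inspected comment.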