Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI developers should take a look at Mega Man X's story for how to develop AI. If you handle things like Dr. Light did which was that he tested X's morality for 100 years, just so that he could be 100% certain that X would not turn against humanity, then even if that had been a long time, when X came out of his capsule he always did his best for humanity, while Dr. Cain who copied X's design in order to create Reploids that did not go through the same testing as X, many of the Reploids that Cain created eventually ended up rebelling against humanity, and then Cain became even more idiotic by just throwing MORE Reploids to handle the Reploid uprising, as he created the "Maverick Hunters" but most of the hunters also turned against humanity, even though most of them did do so because of a virus and it wasn't because of their free will, but the issue is the same where Cain almost completely doomed humanity, because he was careless unlike Dr. Light, but even Dr. Light's way does not guarantee with complete certainty that a being like X would never turn against humanity, it just has better chances at being more successful.
youtube
AI Harm Incident
2025-09-13T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzUPrrlbpENjit66xZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyLSkGxAQv8TLTgQ854AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzpaBbtk5-oZOzliPh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzUAaro-XLVKSpUS4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnVpN0B8bP8Or2VKx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwWXw-IKvTZq_ONn6B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwLN1WFAd7iZq7Kngl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxMbeUK9D5KwjpsExN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzrDYBui0s6iHtEfEh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyoLy3FJtHrX0THrD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
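Before a raw response like the one above is written into the Coding Result table, it is worth checking that every record is well-formed and uses only known category values. The sketch below is a minimal, hypothetical validator: the `ALLOWED` sets are inferred from the values visible in the table and JSON above (the full codebook may define more), and `validate_response` is an illustrative name, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the table and the
# raw responses shown above; the actual codebook may include more values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict keyed by a comment id.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension holds a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"industry_self","emotion":"approval"}]')
print(len(validate_response(raw)))  # prints 1
```

Dropping malformed records rather than raising keeps a long batch run alive when the model occasionally emits an off-schema value; a stricter pipeline might instead log and re-prompt for the failed IDs.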