# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples
- `ytc_UgyOMlTCE…`: I commented this as a response to another comment first, but I’ll resend it here…
- `ytc_Ugwb0BqII…`: Looks like you’ve been shadowbanned Digital Engine. Either that or you used too …
- `ytc_UgyMkWp4J…`: I think he's likely right with the current model they are using. But just to be …
- `ytc_Ugxxg1eNN…`: Ai is used in the military, which within its self is doom and gloom, its once ai…
- `ytc_UgyKkg_AZ…`: AI problem or people problem? Neither lmao, it’s a failure in inference, everyon…
- `rdc_oi3o07b`: This whole article just sounds like a dystopian fever dream. Cameras in the a-pi…
- `ytc_UgwS7x0On…`: Im watching heads spinning as Tesla conquer the world. SpaceX is about to go nuc…
- `rdc_i2vtte0`: If AI males a bad decision and soneone dies, no one has to take responsibility f…
### Comment

> The Only thing i can say to and about this, is "SERVES US RIGHT". The AI is right, beside everything that makes us GOOD as humans in our core we are BAD if it whouldnt be like that our world whouldnt look like it is right now. Sure it whould look way worse if there wasnt the GOOD but why the GOOD get lead or reighned by the BAD? Because u cant get rid of the BAD if ur GOOD since a good person cant do such actions because this whould make him BAD. The point is even an revolution for the good have be done through bad actions since rly bad ppl cant be convinced by words and will not give up there power.

youtube · AI Governance · 2023-07-07T16:1…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response
```json
[
{"id":"ytc_UgzJ4DspGmGM-hwxMsJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNNEsz-XG0LUXqBe14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwfjVj1U8zv55TI3TN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDHZIYqAh8Rp2waoh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwia98UtwU6bdWOO9N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzPHdgKIeNaPlwKivh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxagcOEl-jE1aoEcUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwLiZlgCvf9ufyhrrt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyoIfkE0f7NUbKEhPN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzbMuFjc0dfDsdXAKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
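The raw response is a flat JSON array with one record per coded comment. A minimal sketch of how such a batch could be parsed and validated before loading it into the dashboard, assuming the allowed value sets inferred from the outputs shown on this page (the actual codebook may define more categories):

```python
import json
from collections import Counter

# Allowed values per coding dimension. NOTE: inferred from the sample
# outputs above, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "resignation", "approval", "mixed", "outrage"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array) and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Two records in the same shape as the raw response above (hypothetical IDs).
raw = '''[
  {"id":"ytc_A","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_B","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''
records = parse_coded_batch(raw)
emotions = Counter(r["emotion"] for r in records)  # quick per-dimension tally
```

Failing loudly on an unknown value catches both model drift (new labels the codebook never defined) and truncated JSON before bad rows reach the coded dataset.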