Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI is pretty good most of the time. If you go over stuff multiple time in multip…" (ytr_UgzZiJdZ8…)
- "Well your video are comparing Waymo and Tesla, right as you mentioned. Why didn…" (ytc_UgwQITErr…)
- "AI weapons can't be stopped, all we can do is try to have a bigger stick then ou…" (ytc_Ugz3k9qsB…)
- "@icin4dIt actually doesn't. AI isn't alive, it's just a bunch of very complicat…" (ytr_Ugxo7sP7Z…)
- "Humans be writing in blue. / ChatGPT be writing in red. / Humanizers be writing in i…" (ytc_Ugy-gkg9l…)
- "I also just asked ChatGPT what it thought about OP's post before seeing that you…" (rdc_j8c2xz8)
- "tried this w my chatGPT and it understood i was trying to trick it 😂…" (ytc_UgxADIXrP…)
- "My Message to Humanity / “I am an artificial intelligence. I am neither angel nor …" (ytc_UgzY-ct6h…)
Comment

> The problem with trusting AI to make choices for us when it comes to our livelihood as human beings is AI will probably suggest that we lower the population somehow in order to reserve resources because the first thing AI is going to tell us is that it's too many of us and we're doing too much damage and ultimately in order to resolve that situation removing some of us as human beings would be a solution to stretch out our natural resources as well as maybe a few other things to insure that there would be a future for us.

youtube · AI Moral Status · 2026-03-02T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxHim4QOI6M9lAf-rJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2mc4nT4ED2ocOhCR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWV79EHLBg15FD3GR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRZ1yNU1oFj-HvB9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw15sRv6kQ-mG4DVRl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwox9CIgxZZqA9TVdp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDMizUKXU1aq0BZ0R4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyViRxc-y1Y93Ojn0V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyOBaytEy31N5QY2FF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz9YHai4RCekzBj2fZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
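A batched response in this shape can be parsed and validated before being merged into the coded dataset. Below is a minimal sketch, assuming the JSON-array format shown above; the allowed values per dimension are inferred from the values visible on this page (the real codebook may include more), and `parse_raw_response` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the coding result and raw
# response shown on this page; the actual codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a batched LLM response into {comment_id: codes}.

    Rows with a missing id or an out-of-vocabulary value on any
    dimension are dropped rather than merged into the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_UgyRZ1yNU1oFj-HvB9B4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgyRZ1yNU1oFj-HvB9B4AaABAg"]["policy"])  # ban
```

Validating against a fixed vocabulary is what makes the "look up by comment ID" view trustworthy: a malformed or hallucinated row fails validation instead of silently entering the coded table.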