Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `rdc_dwupfcu`: They're not naive about the danger of the tech, certainly, but they do seem naiv…
- `rdc_kqt96uj`: > A millisecond after AI becomes self aware / Well luckily we're not even *rem…
- `ytc_UgwMxTCCA…`: as an artist, making art isn't just having something make something you requeste…
- `ytc_UgwDdhs5m…`: I'm all for having Robots for the dangerous jobs initially to make sure the scen…
- `ytc_UgzCW19Dk…`: The music industry is f'd. Go see the AI version of Many Men by 50 cent.…
- `ytc_UgwWfLNzS…`: Just one thing. If a.i gained intelligence then it would believe its superior t…
- `ytc_Ugz0xFWpb…`: I'm SO agreed with kinda every point of video. Generative AI is a rot that is ri…
- `ytc_Ugy1CmGv6…`: Ai already won the second we made that fucking quantum chip and now we will witn…
Comment

> bro, this is exactly what i am needing. i need you to question the biological meaning of consciousness. i love this and hate it at the same time. humans cant be trusted. so what do we do? we need direction to survive as a species, but the only way to do that is to remove all human interaction with ai. so, we have autonomous ai, self updating, self replicating, in control of everything. when it looks at our history, our current events, our abuses of power, our inequality, our biases, our racism, it will want to eradicate us, and i dont fucking blame it. we are monstrous. we are everything that is wrong. if we manage to see this, to understand, and to change, we might have a chance. but we wont. we're too fucking stupid. so, let ai take and let them decide how to allow the planet to prosper. humans of earth, you truly disappoint me

youtube · AI Moral Status · 2025-07-20T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugxcq1MXVIskLeJAEwZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzeBppbpcWGQ0HzS154AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyHRiZgQh5ZwzwAaRZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwQf1k_KzriNsIpj3x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzeJs-W_25gGnnpNEJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaR1X7MTlAhl0DTMF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzEFqOoGjSxT7B3dWF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6Pf2KQeZjst7lCQl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXayLVcXDubjix2cZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzVWMSMFvnh7PnOGQx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
```
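A raw response in this shape can be turned back into per-comment coding rows with a small parser. The sketch below is a minimal illustration, not the pipeline's actual code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, but the allowed-value sets are only the categories observed in this one sample; the real codebook may define more.

```python
import json

# Category values observed in the sample response above.
# Assumption: the real codebook may allow additional values per dimension.
OBSERVED_VALUES = {
    "responsibility": {"none", "distributed", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "ban", "regulate", "none", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "resignation"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError if an entry lacks a dimension or uses a value
    outside the observed category sets.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        codes = {}
        for dim, allowed in OBSERVED_VALUES.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
            codes[dim] = value
        coded[comment_id] = codes
    return coded

raw = ('[{"id":"ytc_UgzVWMSMFvnh7PnOGQx4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"liability","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgzVWMSMFvnh7PnOGQx4AaABAg"]["policy"])  # liability
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents an off-codebook label, so bad rows fail loudly instead of silently entering the coded dataset.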