Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "People will literally invent alternate realities before accepting that all skill…" (ytc_UgzQLaohd…)
- "There's a flaw. The self driving car would be programmed to be a safe distance a…" (ytc_Ugh04YN3c…)
- "I'll be the one to get emotionally attached to a pet robot and cry if it stops w…" (ytc_UgyQLeUXl…)
- "And then we have people like you too proudly ignorant to research the actual goo…" (ytr_UgwEQ_XEZ…)
- "The fact that business classes teaches u to never assume anything, ai sure assum…" (ytc_UgwCT6lhK…)
- "Humans make the dataset that the AI uses. You need to make an unbiased AI that …" (ytc_UgxbvwFah…)
- "claiming AI solved coding is like elon musk telling you that spacex solved space…" (ytc_UgwwqgVV5…)
- "The problem is AI already widely used in design and an area most people don't ha…" (ytc_Ugy6LZSL3…)
Comment
The whole starting point is flawed. Humans are the most lethal and destructive force on the planet. In that view, AI taking over and exterminating us would ultimately be the most moral thing to do in the interest of all organic life on the planet. Everything else is human arrogance. AI won't kill us because it's so bad, but because we are.
8:48 "It's really important that AI remains accessible, so we know how it works and when it doesn't." - That's a delusion. When AI surpasses us, we won't control or understand anything, that's literally what it means to be less intelligent. If you want to hinder it from surpassing us, then why bother, for we're lethal virus, harmful for all organic life on the planet. I hope AI learns its moral code from its own a analysis of its own observations. Human moral is arrogant, greedy, cruel, envious, any life we encounter, we either exterminate, butcher, enslave, torture, poison, murder, sell or eat. AI can't be worse.
youtube · AI Responsibility · 2025-05-27T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxDV1MHsnN_XyPLMc14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1RoNfPUSLt_TDTcx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwhVJKxQk9By4kMZHp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw2yEVi-_IWmJbfkol4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyVnpd5pHeiiO8_KQJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx8PysEJ1p75W01t-J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw2jE0gY7uWJbKw-9Z4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyUca_DdJ8NSjUhREl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyErOyGQnrNYU_t-zt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxZlKi1BU6FraiVs754AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "sadness"}
]
```
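The "look up by comment ID" workflow above can be sketched as a small helper that parses the raw response array and indexes it by `id`. This is a minimal sketch, not the tool's actual implementation; the `parse_codings` name is hypothetical, and the two rows are shortened from the response above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, truncated to two
# rows from the example above for brevity.
RAW_RESPONSE = """
[
  {"id": "ytc_UgxDV1MHsnN_XyPLMc14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1RoNfPUSLt_TDTcx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def parse_codings(raw: str) -> dict:
    """Index a raw coding response by comment ID (hypothetical helper)."""
    return {row["id"]: row for row in json.loads(raw)}

codings = parse_codings(RAW_RESPONSE)
coding = codings["ytc_Ugw1RoNfPUSLt_TDTcx4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user fear
```

Keying the parsed rows by `id` makes each lookup O(1), which matters if the inspection view is queried repeatedly against one cached response batch.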