Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
32:23 it is the same problem as trying to do the same with the human brain. Which neuron connected to what other neuron does what? Well, we are just barely trying to understand that in terms of entire regions of the brain. Much less individual connections. And AI do not even properly have those regions, though some multi network models might be comparable to that.
Why do people commit murder? Where does "murder" happen in the brain? Can you turn it off without breaking what makes them a human being? We don't have any idea. And AI is worse than that. To understand AI well enough to build it safely would mean understanding far beyond what makes us ourselves.
Source: youtube · Video: AI Moral Status · Posted: 2026-01-08T16:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxXvP06xB_rvHXU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxB2lUMC10V2WCKMdh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzG6m5nNk-ZQp4yPdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdLgUpm0zqRww_36x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXKB0Q9EOyb0TYAQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz9jWegCqJ5MLH9GXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjHKweqa7s6ZC0JHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-KZ4-7G2BKQOny894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy88yz9_C5B-z5vALJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-G5YAEcxVcUZLiZt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
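A raw batch response like the one above can be checked mechanically before the values are written back to the coding table. Below is a minimal sketch; the allowed value sets are assumed from the categories visible in this batch and the Coding Result table, and the full codebook may contain more:

```python
import json

# Allowed values per coding dimension (assumed from this batch;
# the project's full codebook may define additional categories).
SCHEMA = {
    "responsibility": {"ai_itself", "government", "developer",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag out-of-schema values."""
    rows = json.loads(raw)
    problems = []
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                problems.append({"id": row.get("id"),
                                 "dimension": dim,
                                 "value": row.get(dim)})
    return problems

# Hypothetical one-row batch for illustration:
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_batch(raw))  # → [] when every value is in the schema
```

Rows flagged here (for example, a dimension missing entirely or an invented label) can be sent back for re-coding rather than silently stored as `unclear`.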