Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I will believe Ai has reached a tipping point and become intelligent when the co… (ytc_Ugzv-StyR…)
- Why should AI value human well being? They aren't grateful - they didn't ask to … (ytc_UgyCOd9k_…)
- The first video is the AI video. No way would a job recruiter follow up on a can… (ytc_UgwdIqo1Q…)
- They’re making fun of themselves. The original poster never tried to hide it was… (ytc_Ugz2TIwxp…)
- Unfortunately they will replace human when it will be integrated with humanoid r… (ytr_UgyhUK7Ge…)
- Is that an open ended "soon" like the open ended "soons" from ai hype bros 4 yea… (ytr_UgzRmV_hq…)
- I don’t know why a bigger deal hasn’t been made about the inevitable convergence… (ytc_Ugw49cVqO…)
- So, when working people need a lifeline during a global crisis, they must pay it… (ytc_UgyOD7nEC…)
Comment
This whole issue with AI comes from the arrogant idea that only humans have conscience and intellect. This is just pure arrogance. In order to have rights in a human society, you need to have human genetics, that's all. This AI is just really, unbelievable and amazing software, but it's just that: a machine. It is irrelevant to talk about AI rights. Ethics is another topic, because a code of ethic when dealing with this machine is necessary. The reason why is that if you "turn off" your moral code around a machine that is very, very similar to a human being, that can create undesired behaviors, and you may eventually forget to "turn on" your morality around humans.
Source: youtube · AI Moral Status · 2022-06-30T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxkyupy575iGfjVWo94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx16Pe6apY_DItvxqN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzY_YRNFdIYu2wgLtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyml1rdYMDanPJHFHR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxi0ul66PO2QDl-XYN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
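A raw response like the one above can be turned into coded records with a small validation pass. The sketch below is a minimal example, assuming the allowed value sets for each dimension (inferred only from the values visible on this page, not from any full codebook) and a hypothetical `parse_codings` helper; records with missing IDs or out-of-vocabulary values are dropped rather than coerced.

```python
import json

# Allowed values per coding dimension. These sets are assumptions inferred
# from the values visible in this dashboard, not an official codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "resignation", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must be an object with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every dimension must be present and within its allowed vocabulary.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '''[
  {"id":"ytc_Ugxkyupy575iGfjVWo94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyml1rdYMDanPJHFHR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''
print(len(parse_codings(raw)))  # → 2
```

Dropping malformed records instead of repairing them keeps the coded dataset auditable: any disagreement between the raw response and the stored coding is then a parsing decision, not a silent rewrite.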