Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- 24:00 Lex seems to be wiser than this guy. The guy didn't consider any fully con… — ytc_UgwhRVJ-f…
- On a serious note, these bots should not be allowed to give people advice in man… — ytc_Ugww8qF7m…
- The segment on the ethical implications of AI was particularly powerful. It's a … — ytc_Ugy9y6FuE…
- Oh please, ChatGPT for me has always been helpful. I think people are feeding it… — ytc_Ugyj-aMAs…
- This is ridiculous, robot spying on your from the eyes of the watchers ( big bro… — ytc_UgxDgLnSh…
- Marky Mark An example Yang uses is starting a bakery. In the current economic cl… — ytr_UgzveEICF…
- Please Ai, be a god and upload my brain to you. I am sick of all these and want … — ytc_UgzrrPANj…
- It’s complete BS and hype. Sure, they made a study where LLM was lucky(er). Actu… — ytc_UgxuUjE2u…
Comment
If humans are 'more sentient' than less advanced forms of life, then surely if we create AI beings that are more advanced than humans they will potentially be 'more sentient' than us and technically be deserving of even better rights than humans. And since they will see themselves as better than us they will probably find ways to justify enslaving and generally mistreating us.
youtube · AI Moral Status · 2019-04-11T08:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
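A coding result like the one above can be sanity-checked against the label sets that actually appear on this page. A minimal sketch in Python, assuming the values observed here are the allowed ones (the real codebook may define additional labels; `validate` is an illustrative helper, not part of the tool):

```python
# Label sets observed in this page's data; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate(coding: dict) -> list:
    """Return (dimension, value) pairs that fall outside the allowed labels."""
    return [(dim, val) for dim, val in coding.items()
            if dim in ALLOWED and val not in ALLOWED[dim]]

# The coding result shown in the table above.
coding = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "unclear", "emotion": "fear"}
print(validate(coding))  # [] — every dimension uses an observed label
```

A non-empty return value flags codings where the model drifted outside the expected label vocabulary.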
Raw LLM Response
```json
[
{"id":"ytc_UgxNM4GRi13cSkE_3bt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYVFJh4J0NrQ3DEI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-Q5DKyQ4-6-ZjUKN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgynrUEUKnxPZqAeYll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy2-uaQiG8DugU4lup4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwx3Ied_p_b0xz2AEh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyUFKI19W56UeTHVtF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwsfGbBUBgMy47XJJR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxJ8G11PR5IkT8fd6F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzcDgg_0Gl8shJX_0Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
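Looking a coding up by comment ID amounts to parsing the model's JSON array and indexing it. A minimal sketch, using two records copied from the response above (`index_codings` is an illustrative name, not the tool's actual API):

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records)
    and index the records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Two records copied verbatim from the raw response above.
raw = '''[
  {"id":"ytc_UgxNM4GRi13cSkE_3bt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy2-uaQiG8DugU4lup4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

by_id = index_codings(raw)
print(by_id["ytc_Ugy2-uaQiG8DugU4lup4AaABAg"]["emotion"])  # fear
```

In practice the parse step also needs to handle malformed model output (truncated arrays, stray prose around the JSON), which `json.loads` will reject with a `JSONDecodeError`.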