Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Yes, seniors 55 plus are already seeing loses from age only plus ai changes. Mos… (ytc_Ugwyeu9ks…)
- @AZAEL-music See, only the ignorant and privileged are anti AI. You mentioned a … (ytr_UgxVPOWHH…)
- Someone needs to make an AI model that protects humans from AI. The savior needs… (ytc_UgxcCFFBL…)
- AI is reshaping the job market: more grads are choosing blue-collar trades like … (ytc_UgwTb87sJ…)
- „Problem is, you can't copyright art style” Irrelevant. You can still copyright… (ytr_UgwZ6Gzbv…)
- This is really petty. AI is not completely evil. You're effectively ruining it f… (ytc_UgyY9ZB55…)
- imagine every shoe factory in the world and the owners all decide to replaces e… (ytc_Ugy5kGiNc…)
- When Elon Musk was asked about value. Why did he not say Human Life.. I know the… (ytc_Ugwz-e8go…)
Comment
LLMs are absolutely not conscious. And despite the appearances they don't even reason or understand.
All posts on the topic mention Geoffrey Hinton, but while he's somewhat smart (and right about the necessity of socialism), he's really not very smart. His argument in the quoted interview, for instance, is completely fallacious "logic":
If you replace a human neuron by an artificial neuron, you may consider you basically kill one neuron. Let's see his argument under that lens:
Kill one neuron? Yeah, still conscious. Kill 2, 3? Yeah, still conscious. His conclusion: kill all neurons, still conscious. Obviously false.
And the fact he doesn't immediately realize it and states such an argument in an interview is proof of very average intelligence.
youtube · AI Moral Status · 2025-07-04T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgywYURbaaywUGlQJVZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyBICaq0K6XMrKmr0N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw4KGoNZ6qkfTfPPFx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGodQsUA1hNS32srd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgwI9JOFU3frr2fV6mJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyvao_iPGzRBwwh53d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzuIc7fGxid-5qCeld4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwlY3EABvcA6OQlA94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzbbxzGyZmPctlF6f14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxvBQYWR6s9I5Ukvlx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```