Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.
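For scripted access, the same lookup is easy to reproduce offline. A minimal sketch in Python, assuming the coded results have been exported as one JSON record per line to a hypothetical coded_comments.jsonl file (the file name and layout are illustrative, not this tool's actual storage):

```python
import json

# Hypothetical layout: one JSON record per line, each carrying the
# comment ID plus the four coded dimensions.
def lookup_coded_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the coding record matching comment_id, or None if absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: an ID taken from the raw response shown further down this page.
print(lookup_coded_comment("ytc_Ugwt2q2eTSbitIqx-7Z4AaABAg"))
```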
Random samples:

- "This makes no sense. Neither private nor public school teaches you how to be eit…" (ytr_UgyT7z9SK…)
- "This also explains why so many AI companies 'donated' to Trump's campaigns, and …" (ytc_Ugzt-4Lqd…)
- "Yall actin like ai straight up steals art when all it does is collect research f…" (ytc_Ugy8zIDJR…)
- "How can a chat bot have its own thoughts wtf? Makes no sense. And I've ve heard …" (ytc_UgwUKbDSK…)
- "The only thing that really bugs me beside the art theft is the fact that text AI…" (ytc_UgwIDWsu-…)
- "I used to like the podcast and the clips but I no longer like its click baity ti…" (ytc_UgwhiZjqy…)
- "That hurts! for someone who draw for shit. The punchline stung a bit. 😄 Although…" (ytc_UgwIKKhSy…)
- "Thank you for your comment! If you're interested in seeing more interactions wit…" (ytr_UgzifFkYc…)
Comment
James Gleick reports in The NY Review Vol. LXXII No. 12, in The Lie of AI, mentioning an article published in 2021, written for the ACM, called, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" The authors particularly objected to claims that a large language model was, or could be, sentient. The kicker is that two of the coauthors led the Ethical AI team at Google. Google ordered them to remove their names from the article. They refused and resigned or were fired. This shows that AI makers encourage this myth (and some of them may believe it themselves). There is no real controversy. Machines and software will never be sentient. Only crackpots claim that.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Moral Status |
| Posted | 2025-07-09T21:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
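The four coded dimensions map naturally onto a small record type. A minimal sketch in Python; the value sets listed are only those observed in the raw response below, so the project's actual codebook may define additional levels:

```python
from dataclasses import dataclass

# Category values visible in this sample; the full codebook may be larger.
RESPONSIBILITY = {"company", "developer", "user", "ai_itself", "unclear"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"regulate", "ban", "industry_self", "unclear"}
EMOTION = {"indifference", "outrage", "fear", "approval", "mixed"}

@dataclass
class CodingResult:
    id: str              # e.g. "ytc_Ugwt2q2eTSbitIqx-7Z4AaABAg"
    responsibility: str  # one of RESPONSIBILITY
    reasoning: str       # one of REASONING
    policy: str          # one of POLICY
    emotion: str         # one of EMOTION
```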
Raw LLM Response
[
{"id":"ytc_UgzBKKpk66maK-nV1bF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwi8KnHOOw_GQGscA14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxdWwGEfRMPhDYMkuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx3JADeD_wcgYaYdL94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLYj1v_ngVTYuVweh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwsM8WjGeBz2kBU50h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw9ptTtz4cih9ZrCMR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxrKRLPXW84H5KkrEt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoyJ4Z5MJGJWuJm6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwt2q2eTSbitIqx-7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
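Since each batch comes back as a single JSON array, a little defensive parsing helps catch drift in the model's output. A hedged sketch, assuming the response body is bare JSON as shown above (real model output sometimes arrives wrapped in markdown fences or prose, which this does not handle), using the same sample-derived value sets:

```python
import json

# Allowed values per dimension, derived from this sample only.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw batch response, flagging out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for field, values in ALLOWED.items():
            if rec.get(field) not in values:
                # Keep the record, but surface it for manual review.
                print(f"{rec.get('id')}: unexpected {field}={rec.get(field)!r}")
    return records
```

Flagging rather than dropping out-of-codebook records keeps the batch intact for inspection, which matches the purpose of this page.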