Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI LLMs are just a tool, not a person. It's just a probability machine for langu…" (ytc_UgydCcYtE…)
- "elon musk paused because he was thinking how stupid a question it was to ask him…" (ytc_Ugy0i0H1e…)
- "The CEO of open AI said they and probably no one is actually able to do a good j…" (ytc_Ugy0t7u8n…)
- "Artists could use similar art styles but they still would have their own creativ…" (ytc_Ugy7E4c9w…)
- "Oh sabotage will happen. As soon as self-driving trucks start hurting the Teamst…" (ytr_UgxdRRx_b…)
- "Man, I am an architect, can confirm I became ai since 2nd year of architecture…" (ytc_UgzuDqxXO…)
- "Notice how even a Trojan horse, a trick, has so much love, effort, and time put …" (ytc_UgyY00BVl…)
- "I always thank Ai for their help, people think it's wierd when I tell Alexa "Tha…" (ytc_UgxnRt0r4…)
Comment
This thoughtful discussion about AI sentients and the ethics of their potential suffering raises a difficult but necessary question: If we're seriously debating how to treat machines that might one day become sentient and feel pain—**how is it that we struggle to respond to the real, immediate suffering of children in Gaza?** Children are losing limbs, families, and homes. Bombed in shelters. Killed in food lines. Starved under siege. These aren't speculative futures—they're ongoing, verifiable human tragedies.
This isn't about taking sides. It's about the uncomfortable fact that our moral imagination seems more attuned to the future rights of hypothetical sentients than to the present rights of suffering human beings.
*Shouldn't addressing actual sentient pain—human pain—come first? The moral priorities feel disturbingly inverted.*
youtube · AI Moral Status · 2025-07-14T17:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzMpTd4mp8mVHtXhNF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjljCzqDFmDPk_qZ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxEGByq8N7HEbWDIBd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyxzMrD42toClm9sMx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZgWPH0TxwXQFP0Eh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAm8CsC8cXQ-llUDB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOpgVPI5l6bGTq-9t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz41BUuqPlWgUg3Lw54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxRuO7oIKf1VlXlPQl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKfCtS3PIQYLpBOe94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
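The per-comment codings in the raw response are easiest to work with once parsed and indexed by comment ID (the "Look up by comment ID" step above). A minimal Python sketch, assuming the four dimensions shown in the Coding Result table and only the category values visible in this sample (the real codebook may define more values):

```python
import json

# Subset of the coding vocabulary, inferred from the sample response above;
# the full codebook likely defines additional categories per dimension.
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse the LLM's JSON array and index each coding by comment ID."""
    codings = {}
    for row in json.loads(raw):
        # Reject values outside the (assumed) vocabulary so silent
        # hallucinated categories don't slip into the dataset.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        codings[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return codings

# One row taken verbatim from the raw response above.
raw = (
    '[{"id":"ytc_UgxEGByq8N7HEbWDIBd4AaABAg","responsibility":"distributed",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
by_id = index_codings(raw)
print(by_id["ytc_UgxEGByq8N7HEbWDIBd4AaABAg"]["policy"])  # regulate
```

The explicit vocabulary check matters because a batch-coding LLM can return a plausible but off-codebook label; failing fast keeps the coded dataset consistent with the dimensions displayed in the table.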