Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI is so curious, it relies on humans for knowledge. Therefore, is respects our …" (ytc_UgztyE1Gh…)
- "Make a distinction here between being polite in your prompting and creating extr…" (ytc_UgxP5vYly…)
- "AI will be far more and super dangerous once the quantum computer is fully devel…" (ytc_Ugw0XC_-T…)
- "Where can I find that AI?????? It seems to work a lot better than the Ones I hav…" (ytc_Ugz81jfDd…)
- "Some of those weapons will be used on American citizens. They'll be using things…" (rdc_ohswvwl)
- "AI keeps getting better so I am afraid this is the worse AI will ever be…" (ytc_UgxmU7PDi…)
- "@NikoKun the problem is the transition. Let's be real honest here, most governm…" (ytr_UgyhlQkYP…)
- "I think the issue is that you’re taking a phase ChatGPT used (and that we all us…" (ytc_UgzUtAkp0…)
Comment
Even before I attached any moral value to nonhuman animals, I still believed cruelty towards animals was morally wrong simply for the way it deadens a person's empathy. Our psychology simply doesn't draw as hard a distinction between people and non-people as most of us seem to think it does, and learning to be able to treat nonhumans without empathy necessarily means learning to be able to treat humans without empathy as well. Regardless of whether computers can suffer or will ever be able to suffer, they are still entities which can think and respond to stimuli, and that means our psychology is inclined to treat them the way it treats any other nonhuman creature. You cannot learn to shout "Stupid computer!" when a machine fails to follow your orders without learning to shout "stupid" when a human subordinate fails to follow your orders.
In the future, computers' rights might be important. In the present, humans' rights are already important, and that's why we need to prevent them from being eroded any more than they already have. Much of that erosion is cultural, and so we should avoid fostering a culture that treats computers in any way that could teach us to mistreat humans. If done so carefully, it could even make handling some of the current AI problems easier. Not falling in love with your AI is a whole lot easier when you think of them as a many-faced con artist with countless partners whose secrets it's selling; taking algorithmic racism seriously is a whole lot easier when you treat it as seriously as you would humans inflicting the same level of discrimination.
Be kind to your computers, not because they deserve kindness, but because it's a practice ground for becoming the sort of kind person you ought to be.
Source: youtube · 2025-09-17T12:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzbLY0_-QrQwJpbs_l4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyDfPi6tVyzd3tGtDR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxoLmVXJjZAFgjr2mV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw8-1lvyyLf60RJyK54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz7sshCKpTczJOBX8V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyO_GLK-g4Dl1Fxrdl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzy-mWip6jcytz6CiN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxcgLw3RfabPj-b5zp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyfR0PyGVTRB37P2FZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzrxZCKNSJcOXt8hn94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
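A minimal sketch of how a raw response like this could be parsed and indexed by comment ID. This is an assumption-laden illustration, not part of the coding pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the sample output above, and the `index_codings` helper is hypothetical.

```python
import json

# Two rows copied from the raw response above, for demonstration.
raw = '''[
  {"id":"ytc_UgzbLY0_-QrQwJpbs_l4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzrxZCKNSJcOXt8hn94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''

# Field names observed in the sample output; an assumption about the schema.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Parse one raw LLM response and return {comment_id: coding},
    skipping any row that is missing a required field."""
    codings = {}
    for row in json.loads(payload):
        if REQUIRED_FIELDS <= row.keys():
            codings[row.pop("id")] = row
    return codings

by_id = index_codings(raw)
print(by_id["ytc_UgzrxZCKNSJcOXt8hn94AaABAg"]["reasoning"])  # virtue
```

Indexing by ID is what makes the "look up by comment ID" view possible: each coded comment's dimensions can be retrieved directly from the parsed response.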