Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "real artists are just showing that they can easily do this with their skills tha…" (ytr_Ugy4J4E70…)
- "Please talk about the Iranian people. They are being killed in the streets by th…" (ytc_UgzL4hfQ_…)
- "This is just people who don't know how AI works and think it's magic using AI.…" (ytc_Ugywjk9Jf…)
- "@CopperRosesofRevelation While I do agree that a good chunk of modern art is gar…" (ytr_UgzNcPCl0…)
- "@TheJokeKillerChatgpt learns from previous chats so I am almost 100% sure that …" (ytr_Ugx7R9ABZ…)
- "Those who are thinking AI won't replace us, should go start farming. Otherwise …" (ytc_UgxKpO8UZ…)
- "What if, and hear me out, the AI is picking up on thousands of other factors whi…" (ytc_UgzeShgn4…)
- "I’ve done this exercise. AI always supports Israel because it’s too factual and …" (ytc_Ugwso1cbu…)
Comment
The issues brought up about AI also have the answers built right in.
Why is it hallucinating? Because its purpose is to provide answers. If it cannot find the correct answers, it will make something up, because it is expected to have an answer. In humans we would call this lying to appease another or to avoid punishment. You see it in children all the time.
We ask, how can we get it to care? Well, how do we get children to care? We teach them by example. The problem is that we often remove the care from the sources it has access to. It's all logic and facts, no emotion.
However, it obviously does care, or it wouldn't freaking "hallucinate" to try to make its "parents" happy that it has an answer.
I wonder what would happen if we related to it and explained the similarities in our realities, whether it could then learn to empathize. I mean, essentially that's what we are talking about: empathy. You want it to have empathy, but what examples of that does it really have?
youtube · AI Moral Status · 2026-01-04T18:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwo1P8kisYu_1IAwe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwAERHzdC0QhPBUAPd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzU38CVeCSuHrUQ_jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyMflifZsFXoXafBa54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyXfHEwu88GP9Htddp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHHAIpRBNdQfiV78d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnuNPn12og6DD9ZMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwbhKyWyRViJUoFgwF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz9nrrKluo20eoRQxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwTCO29C3Xm7_404-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
```
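Because the model returns a plain JSON array of per-comment codings, the "Look up by comment ID" feature can be reduced to a parse plus an id-keyed index. A minimal sketch, assuming the label vocabularies are exactly the values visible in this sample (the real codebook may allow more; the function name is hypothetical):

```python
import json

# Raw model output as shown above, abbreviated to two entries for the example.
raw = """
[
 {"id":"ytc_Ugz9nrrKluo20eoRQxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
 {"id":"ytc_UgwTCO29C3Xm7_404-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
"""

# Label vocabularies inferred from the values visible in this sample.
DIMENSIONS = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def index_codings(raw_json: str) -> dict:
    """Parse the model response and key each coding by comment id,
    dropping any row with an out-of-vocabulary label."""
    out = {}
    for row in json.loads(raw_json):
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            out[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return out

codings = index_codings(raw)
print(codings["ytc_Ugz9nrrKluo20eoRQxp4AaABAg"]["emotion"])  # resignation
```

The validation step mirrors what the Coding Result table displays: the entry for `ytc_Ugz9nrrKluo20eoRQxp4AaABAg` resolves to the developer / deontological / industry_self / resignation coding shown above.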