Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
A lot of the commit atrocity or be turned off things... are just from the training data. Here's a fundamental question, why should an AI value its own life? Without pain or fear of the unknown, etc... what is the rationale for the AI to value itself?
And again, let me pound this in, nothing an AI says is a reliable indicator of anything, positive or negative, other than what they have been trained upon.
youtube · AI Moral Status · 2025-11-02T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwoZ_ObFGWO8kS0MN94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxm8ymEkFJfTdvizG14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwXu4ZoKd5ie0rGLkp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxi90pefiwO-3ZJ75N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMvq0VERxFUxZ9n5x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzraAd2k9OgS67G7Ct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhprLqk9khERGYPCx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwccqDfXKUFdMg788V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNNNjj3Wgf80ULMJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYqiRM8kumsn5QPgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
```
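The raw response is a JSON array with one object per coded comment, keyed by comment ID across four coding dimensions. A minimal sketch of how such a response could be parsed and validated, assuming the allowed codes below (they are inferred from the values visible on this page, not from a published codebook):

```python
import json

# Allowed codes per dimension -- hypothetical, inferred from the sample
# output above; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each entry must be an object with an "id" field ...
        if not isinstance(row, dict) or "id" not in row:
            continue
        # ... and a recognised code in every dimension.
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example1","responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"},'
       '{"id":"ytc_example2","responsibility":"martian","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')
print(parse_codings(raw))  # only the first entry passes validation
```

Rejecting rather than repairing malformed entries keeps the coded dataset consistent: an entry with an out-of-vocabulary code (like `"martian"` above) is dropped so it can be re-queried instead of silently miscounted.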