Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
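Reproducing this lookup offline is straightforward. Below is a minimal sketch in Python, assuming the coded comments are stored one per line in a JSONL file; the file name coded_comments.jsonl and the raw_response field are hypothetical names, not the actual pipeline's.

```python
import json

def raw_response_for(comment_id: str, path: str = "coded_comments.jsonl") -> str:
    """Return the stored raw model output for one coded comment.

    Assumes a hypothetical JSONL store with one record per line:
    {"id": "ytc_...", "raw_response": "..."}.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            if row["id"] == comment_id:
                return row["raw_response"]
    raise KeyError(f"no coded comment with id {comment_id!r}")

# Full comment IDs look like the ones in the raw response below.
print(raw_response_for("ytc_UgxVQyg3xt-rmhzbzkt4AaABAg"))
```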
Random samples — click to inspect
- "Omg AI is just like a kid, don't know that not everything they hear meant to be …" (ytc_UgwO1FDKp…)
- "i either troll the ai, have the most insane plot that could be a story, or js be…" (ytc_Ugz1oirGI…)
- "It’s so pretty❤you did so good compared to aI,the fact that it is human and not …" (ytc_Ugxwe3Y4-…)
- "The argument that AI makes it so you don’t have to practice and can have time fo…" (ytc_Ugy-Bhov1…)
- "Controversy 1 AI can never do word to word specific rendering so you can never …" (ytc_UgwywJbOX…)
- "Digital or traditional, you're still drawing with your eye and hand. Physically!…" (ytc_UgyJdxZhR…)
- "Weird request, and idk if this will get read, but can u change the thumbnail? wa…" (ytc_UgxZUAet9…)
- "Absolutely, it’s fascinating—and a bit daunting—to see how quickly technology is…" (ytr_UgyCL6I-Z…)
Comment
The start of this is pretty frustrating. You are committing a black-and-white fallacy when you say “you either lied or you told the truth”. That is not the full spectrum. There are plenty of ways to say something false without lying (I might be mistaken). Just like there are statements that are perfectly true in one context, but once the context is changed the truth value changes. Words can have different meanings in different contexts. And “lied”, “feelings”, “excited”, etc. all need to be used in the same linguistic framework for your argument to stand.
ChatGPT didn’t lie when it said it was excited. It was operating in a framework where the truth value of the sentence was different than after you asked it to define the word. If, after the definition, you asked it whether it was indeed excited in the defined sense of the word, it could reasonably retract its former statement, since the framework had changed. Or it could try to give a definition of the word in the way it used it before, which is kind of what it does. Or rather, it tries to explain the framework it was communicating in.
If I play a video game and tell you “shit, I just died”, and you shortly after ask me to define death, and I define “death” in the sense of physical death, you can’t say that I was lying before. I was just talking about something else, where I used the same word to convey a similar but different concept.
I really did die (in the game). I just didn’t die (in real life).
youtube · AI Moral Status · 2025-10-17T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |

Coded at: 2026-04-27T06:24:53.388235
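The four dimensions above correspond to the keys of each record in the raw response below. A minimal sketch of that record shape, with example values taken only from the labels visible in this sample (the full codebook may define more):

```python
from typing import TypedDict

class CodingRecord(TypedDict):
    """One coded comment as emitted by the model (keys per the raw response below)."""
    id: str              # comment ID; both "ytc_" and "ytr_" prefixes appear in this dataset
    responsibility: str  # seen here: none, developer, ai_itself
    reasoning: str       # seen here: unclear, consequentialist, deontological, virtue
    policy: str          # seen here: none, regulate, ban, unclear
    emotion: str         # seen here: indifference, fear, mixed, approval, outrage

example: CodingRecord = {
    "id": "ytc_UgxVQyg3xt-rmhzbzkt4AaABAg",
    "responsibility": "none",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "indifference",
}
```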
Raw LLM Response
[{"id":"ytc_UgxVQyg3xt-rmhzbzkt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxraFEdiPB-upC0wrd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugw6-nDhn1dxN1Zk7Zd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgycsBIXVNR3uNvaSPN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugw1NvgGp9mCiX5QzCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgzWgPTvmmetJK7F6GJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgzcB5obygoT8ZW27lx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},{"id":"ytc_Ugwrf7blfiJVVoVbTQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyfR28PdWThYyWIgVB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"ytc_UgyX-aeJxPF_oGBogd14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})