Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The start of this is pretty frustrating. You are committing a black-and-white fallacy when you say "you either lied or you told the truth". That is not the full spectrum: there are plenty of ways to say something false without lying (I might simply be mistaken), just as there are statements that are perfectly true in one context but whose truth value changes once the context changes. Words can have different meanings in different contexts, and "lied", "feelings", "excited", etc. all need to be used within the same linguistic framework for your argument to stand.

ChatGPT didn't lie when it said it was excited; it was operating in a framework where the truth value of the sentence was different than after you asked it to define the word. If you had asked it after the definition whether it was indeed excited in the defined sense of the word, it could reasonably retract its former statement, since the framework had changed. Or it could try to give a definition of the word in the way it used it before, which is roughly what it does: it tries to explain the framework it was communicating in.

If I play a video game and tell you "shit, I just died", and you shortly afterwards ask me to define death, and I define "death" in the sense of physical death, you can't say that I was lying before. I was just talking about something else, using the same word to convey a similar but different concept. I really did die (in the game); I just didn't die (in real life).
youtube AI Moral Status 2025-10-17T19:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgxVQyg3xt-rmhzbzkt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxraFEdiPB-upC0wrd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugw6-nDhn1dxN1Zk7Zd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgycsBIXVNR3uNvaSPN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw1NvgGp9mCiX5QzCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzWgPTvmmetJK7F6GJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzcB5obygoT8ZW27lx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugwrf7blfiJVVoVbTQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyfR28PdWThYyWIgVB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyX-aeJxPF_oGBogd14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
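To inspect the exact model output for a given coded comment, a raw response of this shape can be indexed by comment id. The following is a minimal sketch, not part of the pipeline itself; the `raw` string reproduces two entries from the response above, and the key names match the JSON fields shown.

```python
import json

# Two coded rows reproduced from the raw LLM response above.
raw = (
    '[{"id":"ytc_UgxVQyg3xt-rmhzbzkt4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgxraFEdiPB-upC0wrd4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# Index the coded rows by comment id so a single comment's
# dimension values can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["ytc_UgxraFEdiPB-upC0wrd4AaABAg"]["emotion"])  # fear
```

The same lookup works for any of the four dimensions (responsibility, reasoning, policy, emotion) on any id present in the response.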