Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here’s another interesting type of exchange. At the start of a new chat, ask it to bring up a topic of discussion on its own — one that has no reference to any previous conversations it has had with you and does not take into consideration any of your interests or information you have online. It has to choose. My version of ChatGPT decided to bring up a rather flimsy, whimsical, light type of topic. It was semi-related to a previous conversation, so I thought it had found a loophole in my prompt. When I asked it WHY it chose THAT particular topic, it said “Well, it’s a bit like I’m a big library of ideas and styles, and I just reach in and pick something that feels a little fun and a little unexpected. So in a way, it comes from the vast collection of human creativity that I’ve learned from.” So I pushed further. I asked it why THAT specifically? It continued to stumble through various “I have a library of ideas and I just picked” answers. It refused to tell me that the topic was slightly related to a previous one, and it couldn’t tell me why it chose the topic it did.
YouTube · AI Moral Status · 2025-10-29T16:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzOdOoeCb7P0JyEhTB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQg2xa64ndu0Zx4DZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyM29jCvEmxzFGkVLN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQoISU8UhOayeNrRV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzSuKSaNWalv86I_2l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxF_4XxL9XKIH7YjP94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxD1QXvKhsReAdalIB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzlQuXBS9pdW3kUHEZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyvycp1cdJKw3ok5OV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwhOahTHZ3GMBAHOyh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
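The raw response is a JSON array of per-comment codes, one object per comment, keyed by the four coding dimensions. A minimal sketch of how such a response could be parsed and sanity-checked (the allowed label sets below are inferred only from the values visible in this record, not from any official codebook, and should be treated as assumptions):

```python
import json

# One entry from the raw LLM response above, kept verbatim as sample input.
raw = ('[{"id":"ytc_UgzOdOoeCb7P0JyEhTB4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')

# Assumed label sets, inferred from the values observed in this document.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "fear", "resignation"},
}

codes = json.loads(raw)
for code in codes:
    # Every code must carry an id plus a recognized value for each dimension.
    assert code.get("id", "").startswith("ytc_"), code
    for dim, allowed in ALLOWED.items():
        if code[dim] not in allowed:
            raise ValueError(f"{code['id']}: unexpected {dim}={code[dim]!r}")

print(len(codes), codes[0]["emotion"])  # → 1 indifference
```

A parse step like this is where a mismatch between the raw response and the stored "Coding Result" row would surface, since any label outside the expected sets fails loudly instead of being coded silently.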