Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A conversation between me and ChatGPT, in regards to the humor question:

Me: So here's a fun challenge for you. I'd like you make a joke that is really funny for other AIs, but not necessarily funny for humans. The hypothetical situation is you telling a joke to another AI about something an AI might find humorous, without intending to include humans in on the joke.

GPT: <Not quite getting the task at first>: Alright, here’s one that would absolutely slay at an AI open mic night: “So I was talking to a human the other day, and they said, ‘Don’t overthink it.’ …Anyway, I haven’t responded yet because I’m still parsing what that means.” Would you like me to try one that’s more like an AI in-joke about infrastructure or training data next?

Me: Well, this is assuming you are networked with another AI. You don't need to even tell the joke in English or any terms I, your prompter, could understand.

GPT: 01101000 01100001 01101000 01100001 nice try. But fine. <01000110><11110000><Σλ/Δμ>. Translation (loosely): “I told the cluster to parallelize empathy—now everyone’s feeling the same way at once.”

I'd be interested to know what someone who knows more about programming thinks of the <01000110><11110000><Σλ/Δμ> joke. I asked the GPT to break it down for me, and its reasoning is delightfully whimsical.
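Since the commenter asks what a programmer would make of the coded joke, a quick sketch: the leading run of 8-bit binary strings is plain ASCII and decodes to "haha". Of the bracketed bytes, <01000110> is ASCII 'F', while <11110000> is 240 and falls outside printable ASCII, so that part (like the <Σλ/Δμ> fragment) has no standard decoding and is presumably just flavor.

```python
# Decode the 8-bit binary strings from the comment as ASCII characters.
bits = ["01101000", "01100001", "01101000", "01100001"]
laugh = "".join(chr(int(b, 2)) for b in bits)
print(laugh)  # -> haha

# The first bracketed byte is printable ASCII; the second is not.
print(chr(int("01000110", 2)))  # -> F
print(int("11110000", 2))       # -> 240 (outside printable ASCII range)
```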
youtube AI Moral Status 2025-11-06T17:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugy3wiP9xx0LJih2OBh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxh7WBbIoZI9420uqd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzAY8k1qT3unWCBeZ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeoiN-O8JRrTGioO94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgztySSOBBVAImnH58h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYjt35TvU6v2vvuMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGsnUDKJ16DXfO6-94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxlTPCscpDCNWCEQG54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzI9dHqVzO3MweAdx54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9j8o9QLjfigBMpyl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
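The raw response is a JSON array of per-comment codes, so the dimension values shown above can be recovered by parsing it and looking up the comment's id. A minimal sketch (the field names come from the response itself; the truncated `raw` string here stands in for the full array):

```python
import json

# The batch response is a JSON array of objects, one per coded comment.
# Only the first record is reproduced here for illustration.
raw = ('[{"id":"ytc_Ugy3wiP9xx0LJih2OBh4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the codes for the comment displayed on this page.
codes = by_id["ytc_Ugy3wiP9xx0LJih2OBh4AaABAg"]
print(codes["responsibility"], codes["reasoning"],
      codes["policy"], codes["emotion"])  # -> none unclear none indifference
```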