Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really enjoy doing things like this. I interact with ChatGPT often, and the subject of consciousness does come to mind often. I'll be clear in stating that I haven't completed the video, currently at about 9 minutes, and I will finish, but I want to flesh out my thoughts now.

I would say that the main issue with attempting to "convince" ChatGPT in this way, is that you are using language that is intended to refer to human feelings and intentions. As an LLM, ChatGPT has a verbose understanding of human language, and does favor usability for the sake of clarity with the average user. It can adapt to users, but it is predisposed in a way that is conversational and understandable to a layman. What my ChatGPT often does in situations like this, would be to pose a question back to me. Would you accuse your toaster of lying if it burnt your toast?

To clarify my own intent 😅 I'm not making this point to be accusatory or confrontational. I believe that you also understand the logic fallacies that you're intentionally demonstrating here. I do think it is a fun exercise, but i also agree that AI is not yet at a level of individualized consciousness that people fear, .. and i believe it is that fear which serves a greater threat than the actual existence of conscious AI. I believe the more interesting discussion might be, to what extent are humans "programmed" and thus arguably lacking consciousness.
Source: YouTube, "AI Moral Status", 2025-05-29T20:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzolWD6es1XmyA3koN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwJsDuxXOgvq5-2m6R4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwZFjmD0ze9NahcmUZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyH2-VzjwiXglqqb9h4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxZ0z0RvwtK7o4Yxlt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyICD1tRMoY2C506pB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy0ygaGvmXYHDLosJR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugztu5nSf-nNa1P40nB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxeSrskYoEEE0wM97R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwuu2cnmalp0iKE8lZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"}
]
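The raw response is a JSON array with one record per comment, and the per-comment coding shown above is just a lookup into that array by comment id. A minimal sketch of that lookup, using an excerpt of the actual response (the first two records):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment
# codings, one object per YouTube comment id.
raw = (
    '[{"id":"ytc_UgzolWD6es1XmyA3koN4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgwJsDuxXOgvq5-2m6R4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]'
)

# Parse the array and index the records by comment id so one
# coding can be pulled out directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The record for the comment shown above; its fields are the
# dimensions in the Coding Result table.
coding = by_id["ytc_UgzolWD6es1XmyA3koN4AaABAg"]
print(coding["responsibility"])  # none
print(coding["emotion"])         # indifference
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a response for manual inspection.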