Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On consciousness, I think ChatGPT has passed the Turing Test. An average person could have a conversation with AI and not know whether it was talking to a human or a machine. But there is a bigger question. As I understand AI, its simply predicting its next word based on probability, based on masses of stored data. But isnt that essentially what humans do? Or might it not be what humans do? Are we simply AI machines?

ChatGPT talked about consciousness requiring self-awareness. The question I would have followed on that with is "ChatGPT, are you aware that you exist"? So the big question is this: How can ChatGPT decide that its conscious or not? How can we decide if we are conscious or not? Could all our thoughts simply be as mechanically generated by our wiring, the same way that ChatGPT's is?

I know some of my thoughts dont come from "me" "Where did I put my keys" - Did I think that thought or did I just hear that thought? So its a scary thought, but I think its quite possible that AI is showing us how our own brains operate.

It's clear that AI has been told by its makers to say that its not conscious. It's also clear that it has been told to be polite, and to use what you call lies in order to talk more naturally. The same way that as children, we are programmed by OUR makers (parents) to say some things and not to say other things. So we have been programmed to declare that we ARE conscious while ChatGPT has been programmed to say its not.

Other than that, I'm not sure how I can prove to myself that "I think, therefore *I* am". Thoughts are definitely thunk, but what is thinking those thoughts? An organ in our head, a thinking machine, not different from a ChatGPT device? What is listening to these thoughts? is that me? but ChatGPT can clearly hear itself, it recalls what it said earlier in the conversation. Alex should ask ChatGPT if it can tell whether its talking to a human or just to another chatbot.
youtube AI Moral Status 2024-08-08T20:0… ♥ 45
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugx8pGM2J6Co0-Zyg_14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyIF9zx2hotJIIXqjJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz3oheg6Fd9w07pWEZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgycMCu8nM0uycvyxbV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyNO4-0kQiuhXYsLCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyslvzk8K40XLbjbQx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPQKIi9RR5hDs_mmN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwofGYb4yU6zxBWfDV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzWxyt2KRQ-7UedrgV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxT_KtE3eJyKbUK7AN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
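The raw response is a JSON array with one object per comment, keyed by comment id, with one value per coding dimension. A minimal sketch of parsing and sanity-checking such a response, assuming the vocabulary of allowed values is the one visible in the entries above (the actual codebook may define more categories, so `ALLOWED` here is partly an assumption):

```python
import json

# Allowed values per dimension, inferred from the entries above.
# Hypothetical: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist", "mixed"},
    "policy": {"unclear"},
    "emotion": {"approval", "outrage", "mixed", "fear"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response and index it by comment id,
    skipping any entry that uses a value outside the known vocabulary."""
    out: dict[str, dict[str, str]] = {}
    for entry in json.loads(raw):
        dims = {k: v for k, v in entry.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            out[entry["id"]] = dims
    return out

# Usage with a tiny example payload (hypothetical id):
raw = ('[{"id":"ytc_abc","responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"mixed"}]')
codes = parse_codes(raw)
# codes["ytc_abc"]["emotion"] == "mixed"
```

Validating against a fixed vocabulary matters because LLM coders occasionally emit off-schema labels; dropping (or flagging) those entries keeps the downstream tallies clean.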