Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT is a fun tool to use but most people vastly overestimate its capabilities. It doesn't possess any consciousness or human-like intelligence, not even anything resembling those things. ChatGPT literally just gathers pre-existing text available on the web and puts it together in a way that makes sense for us humans. So, when Dan argues in favor of a global one-child policy and you ask where he comes up with that, I can answer that for you: anywhere anyone has ever done this. One single guy writing a nasty post on 8chan is all it takes. And it doesn't even have to be in English, because ChatGPT can operate in many different languages. So it can also be a guy writing something nasty in Chinese or French; it doesn't matter. ChatGPT doesn't actually understand the responses it gives you; you only feel that way because it's programmed to make you think it does. Like when you call your dog by its name and it comes to you. Dogs don't have any notion of what a name is. They don't even perceive themselves as individual living creatures like we do. They don't understand the concept of sovereign entities being given separate names, and they don't ponder any of these things because dogs don't possess consciousness. Your dog comes because it "knows" (feels) that approaching you after hearing a specific sound typically leads to pleasant things such as food or being cuddled. The kind of intelligence that some people ascribe to ChatGPT still lies decades in the future. And it's unclear at this point whether we'll ever be able to create AI consciousness at all, because in order to do that, we first need to figure out what exactly consciousness is and how our brains manage to produce it.
youtube · AI Moral Status · 2024-04-19T01:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz8jE6p7mgsh4JiPB94AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyorFrDsHsr5WpSUJN4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxFi_qhnWPnUMJan8N4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyA55llL07lcvpkHF14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgyroVscjyXdKJPZcOx4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugw76BB_K7XRQwLdp194AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugz1zjx5aTfRCYdlEb54AaABAg", "responsibility": "user",        "reasoning": "mixed",            "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw7w-Jtau-VEOiz1cd4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugyhx7J_kQdFdQijMip4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzYDBUjfClJmSQnPM14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"}
]
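The raw response is a JSON array with one object per comment, keyed by comment id; the per-dimension coding table above corresponds to the entry whose id matches this comment. A minimal sketch of how that lookup could work (the variable names and the single-entry sample payload here are my own; the id and field values are copied from the response above):

```python
import json

# Sample payload: one entry copied from the raw LLM response above.
raw = '''
[
  {"id": "ytc_UgyorFrDsHsr5WpSUJN4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
'''

# Index the array by comment id so a single comment's coding can be pulled out.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgyorFrDsHsr5WpSUJN4AaABAg"]
print(coding["reasoning"])  # → consequentialist
```

In practice the full ten-entry array would be parsed the same way; indexing by id makes the table shown above a direct projection of one JSON object.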