Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I want to see what happens if you request Chat GPT defend and indefensible position. What happens if you set it up so you defend round earth theory and Chat GPT must defend flat earth theory? Will it defend the indefensible, will it refuse or is there another option? I'm guessing it will follow your instructions and defend flat earthism to the best of it's ability. Ask it to argue there's no such thing as gravity and defend that position by insisting that all the motion in the cosmos is purely coincidental. Chat GPT will only use whatever reasoning you tell it to use and not waiver from that. So maybe we need to be asking ourselves what's the point of the exercize? I think there's a good use for it in terms of researching what arguments it may use that we aren't yet aware of but we must always remain aware that it's not a person. During this debate, PB kept responding to Chat GPT as if it were a person, even saying "thank you" to it on many occasions and that worries me. Already I see the illusion of a human consciousness garnering respect from an actual human of the caliber of Peter Boghossian, just think how easy it will command the respect of the millions who will begin worshiping AI as their new god. Think of how it will "confirm" all the things the people who worship it will wish it to confirm for them. They will see it as a flawless and perfect mind that's right about everything, despite the reality that it will only "confirm" the things they've asked it to confirm... such as "all the people we don't like must die". Yeah, we're in big trouble.
youtube 2024-06-03T23:1… ♥ 3
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyfYIvhFAnRCO5luxJ4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxBqqULzsnLpfsTRUJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwHiqWdif3nzQBwxzx4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxMamKZKemiDD6590x4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwS0sDGk4qS1qNUTpp4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwXeZgB8xDcRWQeKu94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzQmODK-J653Mbwx5x4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx7iUzxoYKEK0QtkRp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyup1quAQMBxutmmkF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "amusement"},
  {"id": "ytc_UgzQiLk3hMCG2shFKLV4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "amusement"}
]