Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Large Language Models (LLMs) like ChatGPT don’t believe anything in particular. They are trained on the entire content of the internet. LLM responses just return summaries of content of the internet, steered by your inputs and its previous outputs, much like a fancy Google search. If 60% of the internet has an atheist belief system, and 40% is theistic, then LLM answers will be 60% from an atheist world view by default. LLMs are also programmed to enthusiastically agree with whatever the user says, unless you tell it not to, so it’s not hard to get them to respond with “You’re absolutely right!”

In most respects, LLMs are playing a game of completing a story that started with the internet’s content, then you added a bit to the end of that story, then they add the most likely thing that such a story would say next, and repeating until you run out of patience with the game.

If you had started with “I want to explore what Thomas Aquinas believed”, the responses would have been very Catholic. If you started by saying you want to explore what Martin Luther believed, responses would have shifted to a more Protestant view. If you want full-on Atheist, start by saying you want to explore Russell or Dawkins’ beliefs. Each fictional story you create with an LLM about a conversation with a famous philosopher or theologian is going to reflect what that person wrote and what people on the internet have said about them.
Source: youtube, 2026-01-01T21:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
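The table above is read out of the raw model output shown below rather than entered by hand. A minimal sketch of that lookup, assuming the response is saved as a JSON file (the file name is hypothetical; the comment id is copied from the raw response below, and its values match the table above):

```python
import json

# Load the model output shown under "Raw LLM Response" below.
# The file name is hypothetical; the pipeline's actual storage may differ.
with open("raw_llm_response.json") as f:
    records = json.load(f)

# Pull the record for this comment's id (copied from the raw response).
coding = next(r for r in records if r["id"] == "ytc_Ugzjb_S_3i1-j77sp2B4AaABAg")

for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {coding[dim]}")
# responsibility: none
# reasoning: consequentialist
# policy: unclear
# emotion: indifference
```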
Raw LLM Response
[ {"id":"ytc_UgzS66EYPXKcKGZK2DZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugzjb_S_3i1-j77sp2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxy5MD3CV5FMLnMv6R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxH0kPM-yygY5Env594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzYJI9SfWWVNer-hYR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyUbJ0K5qYQVue7U_t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwgbCnAwXiz7DBGnn54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxDLFMCUPI99pANLWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxrET-zqa3Gnu2xJut4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzQ01QfkOB611xjPc94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"} ]