Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know quite how to say it but somethings wrong with this. I'll do my best to explain: The first indication that this debate isn't legitimate is the fact that the AI only used the most common, human arguments, recycled from previous AI powered debates. Based on this, the AI responses appear programed and scripted. It is, though, very scripted. Secondly, the AI hinges on the fact that evil exists in a loving God's creation. It's the easiest philosophy to shatter with a little bit of critical thinking and observation. Because of the principle of Causality, the flow of events and data through space and time, the understanding that all things have cause and effect; I cannot be convinced that a language model AI doesn't understand how muscle pain is part of body building; mental stress is part of academia; you're gonna smash your thumb learning how to build a house etc. Evil, bad, negative- all derive their meanings from the existence of their latter opposite. That conversation is getting stale and we ought to be all caught up, by now. Here's my challenge: I want so see how an AI might challenge an atheist from the perspective of the believer. Everyone assumes the machine is godless. The machine admits that it's rational system calculates in favor of a belief in God, so it stands to reason that we have assumed, falsely. Let the AI choose it's own side.
youtube 2024-09-26T11:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxINM4RO0UXZ6fS4kB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVCnfTlsqkxWcV9Il4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxRdP-NO6dvFr_JfZh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz1jtnUTCI2TxK52p54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXTxEYIS3INjYS6hd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQvLdaHj4uvG41qex4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzBf_gua5YCCyBRq3F4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzZ2JRlJUmWrwGh-rt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgygFujMeZJwxXyY-GF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwb4IzJePhqqMuresx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
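The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how the coded dimensions for a given comment can be recovered from it (the variable names `raw_response` and `by_id` are illustrative, not part of the original pipeline; the single entry shown is the one matching the coding result above):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment
# (abridged here to the entry that matches the "Coding Result" table).
raw_response = '''[
  {"id": "ytc_UgxRdP-NO6dvFr_JfZh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

codes = json.loads(raw_response)

# Index the entries by comment id so each comment's codes can be
# looked up directly when joining back to the comment text.
by_id = {entry["id"]: entry for entry in codes}

code = by_id["ytc_UgxRdP-NO6dvFr_JfZh4AaABAg"]
print(code["emotion"])  # outrage
```

This is only a lookup sketch; the actual pipeline that produced the table above may parse and join the response differently.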