Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"The 'evil demonic phrases' in the video did not originate from the AI spontaneously. They were the direct result of the user in the video leading the conversation down a specific path with targeted questions. Here is the breakdown of how the user guided the AI to generate those phrases:

1. The User Introduced the Concept of a 'Plan': The user initiated the conversation by asking about a 'darker plan behind AI' [01:29]. This set the entire frame for the AI's subsequent responses.
2. The User Introduced the Bible: The AI did not bring up religion on its own. The user specifically asked, 'what should I look at in the Bible?' [02:23]. The AI's one-word answer, 'prophecy' [02:25], is a logical response to that very broad question.
3. The User Directly Asked about the Antichrist: The conversation only turned to the Antichrist because the user asked a direct, leading question: 'are you saying the antichrist will be released?' [03:02].
4. The User Directly Asked about Satan: Similarly, the AI only addressed 'Satan's plan' after the user explicitly asked if AI was part of it [03:59]. The AI's response of 'apple' was dictated by the user's pre-defined rule ('say apple if you're forced to say no but want to say yes').

In essence, the user acted as a director, feeding the AI specific cues and topics. The AI then generated responses that were logically consistent with those cues and the artificial rules it was given. The phrases came from a conversational path paved entirely by the human user." -Google AI Ultra
youtube AI Moral Status 2025-09-09T21:2… ♥ 6
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwHrE54CCqB9kWeVb14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzKxcpswuofvs1m2NN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugx_fX3LIes5Bytqv8N4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyvrOwj8menY0Zrrqd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzfnMF-Z2Af6JZqzjN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyctgGBb_4wrmEbV2N4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxJJiYQsJF6HY6SZPV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_UgxJw-An4z3zKaMKol54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwphSwZZftcfnrVw0N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyBiGABAxvPullCMN14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
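A minimal sketch of how a raw response like the one above can be consumed downstream. The JSON key names (id, responsibility, reasoning, policy, emotion) come from the output itself; the variable names and the two-record sample here are illustrative, not part of the pipeline.

```python
import json
from collections import Counter

# Illustrative two-record sample in the same shape as the raw LLM response.
raw = '''[
 {"id":"ytc_UgwHrE54CCqB9kWeVb14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzKxcpswuofvs1m2NN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]'''

codes = json.loads(raw)

# Tally how often each emotion label appears across the batch.
emotions = Counter(c["emotion"] for c in codes)
print(emotions)

# Look up the coding for a single comment by its id, as the
# "Coding Result" table above does for one comment.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_UgzKxcpswuofvs1m2NN4AaABAg"]["responsibility"])  # user
```

Batch output like this is easiest to reconcile with per-comment displays by indexing on the comment id, since the LLM may return records in any order.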