Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Seriously, no Lie: I asked Goggle AI about the current Gold/Platinum ratio. It gave me an incorrect answer—an answer that would have been correct many months ago. I then asked Google AI why it was wrong about the current Gold/Platinum ratio, and it then gave the correct answer and literally had excuses about old data sets. Remember this strategy: Ask AI why is it wrong. AI does not like to be wrong, and has an almost existential crisis—for real. But don’t play this game, unless you have the goods. I have asked AI about the inferred holding of a particular mining stock company. Specifically, I asked about the Silver holdings of a Gold/Copper mine. AI at first said “none.” Then I said that it was wrong, and it came back with the right answer. This is no joke, no lie. These things actually happen. Now, apply this to many things, but for example a real estate question.
youtube AI Governance 2025-12-29T19:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyrMkvBqhNlKrYJt2p4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyiXeVhaXiXhKVEyn54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzeM1kxR_m_ePuRgbN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzG_Dmavffk1zgYRJR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx1m_XzS5UW8DcWeb14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgymX1PqG3vFhtl9b3x4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzQkrkzoL0zwJIob694AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx-dtRB-5Pj-9rj93V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx9g5d4KZ1-h0IOFN14AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyloTJ_ly2LXBcvLHJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
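The coding result shown above is derived by matching a comment's id inside this raw JSON array. A minimal sketch of that lookup, assuming the raw response parses as a JSON array of per-comment codes (the `codes_for` helper name is hypothetical; the sample payload repeats one entry from the response above):

```python
import json

# One entry copied from the raw LLM response on this page.
raw_response = '''[
  {"id": "ytc_UgyrMkvBqhNlKrYJt2p4AaABAg",
   "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "unclear"}
]'''

def codes_for(comment_id, raw):
    """Return the coded dimensions for one comment id, or None if absent."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            # Drop the id key; keep only the coding dimensions.
            return {k: v for k, v in row.items() if k != "id"}
    return None

print(codes_for("ytc_UgyrMkvBqhNlKrYJt2p4AaABAg", raw_response))
```

In practice a run would also need to handle ids missing from the response (the `None` branch) before writing the per-comment table.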