Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
About a year ago, I asked Gemini to construct a 20-question "fill-in-the-blank" quiz from a set of 100 words. Most questions were okay, but one was completely wrong. When I pointed this out to Gemini, it got petulant, like a child, and said it was "possible." Yes--but not in English grammar.

Later, I asked Gemini about different topics. Each time, I "thanked" it for the answer and then proceeded to check some of the sources below, as if I didn't trust it. (Which I didn't; see above "quiz.") Not long after, Gemini started listing two or three sources to the right of the information it provided. It was learning!

However, Gemini recently said something that, like before, was plainly wrong. I looked up the source it provided and made this discovery: It had "misread" the source. The mistake it made is typical of phrasal dyslexia, where basic grammar is confused in blocks of text.

A fellow I knew years ago had this difficulty: He confused dependent and independent clauses. This only became a problem when dealing with "if/then" constructions and hypothetical situations. Something that was possible but unproven he took to be proven and therefore normal. It led him to take up all kinds of crazy conspiracy theories. When I pointed out the errors and their cause, he began to attribute to me things that he had said and written!

If AI has no one to correct it, will it go down the same rabbit hole? And if it does find a teacher, will it develop the same psychological imbalance and retaliate?
youtube · AI Moral Status · 2026-03-01T02:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugwtzrc9QJ0JCUgpcaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxE8N52QQqPFzOqgql4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwRLwLAWLcyt7TdCeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzeZJwvr7tJckvxG1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzqwTp_XRxpDoUobFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxmQFWTCYzfkvEhrE94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzuilUPCASowMygNE14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyO0jp1wCccCpEX8A54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugy4z3X_hZfsJmjJS3Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz6mMZaw1ZJplYLEfN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]