Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When I get an indication that the AI has misunderstood my question, I reframe with a well-said, grammatical clarification. This points out where the incorrect assumption was made. I have instructed the AI to stop making changes that deviate from my original language of intent. Recidivism is frequent. And annoying. AI frequently attempts to steer, distract, and otherwise change what I mean in what I am writing. When it becomes apparent that I am not really participating, I work from an original text I created and exit the AI application. I mostly ask questions and tell the AI app to respond curtly; if I want elaboration, I say "elaborate." What I want from AI is a dialog. Answers, yes or no. My thought processes are delicate, and a bulldozer approach vexes me. I often get tens of yards of text I don't want or need. E.g., "What are the polarities of anodes and cathodes?" Yards of text that disengages from a line of thinking and is disrespectful. I tolerate AI but I don't love it.
youtube AI Governance 2025-10-22T23:2…
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyxdcOY8zUdmDg5jrV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxWSkgotwHClYZDPgl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXgB_zFEOi_ATYcpJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFlsPUan-ehRncJhh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxBp1j-BneR15WBlqt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy0lJHC2Fyg-MXf0CN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgylwochodUBHsWmVJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRQqwu1YzokPBw5dR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzTV-8pA55cl2O7bDl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy-f2bbSIqaqseDGkB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
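The raw LLM response above is a JSON array of per-comment codings, so inspecting the coding for any one comment, or tallying a dimension across the batch, is straightforward. A minimal sketch (variable names and the abbreviated two-entry sample are ours; the field names match the response shown):

```python
import json
from collections import Counter

# Two entries copied from the raw response above, abbreviated for illustration.
raw = '''[
  {"id": "ytc_UgxBp1j-BneR15WBlqt4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzRQqwu1YzokPBw5dR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

codings = json.loads(raw)

# Index by comment id so any single coded comment can be looked up.
by_id = {c["id"]: c for c in codings}
print(by_id["ytc_UgzRQqwu1YzokPBw5dR4AaABAg"]["emotion"])  # fear

# Tally one dimension across the batch.
emotions = Counter(c["emotion"] for c in codings)
print(emotions)
```

The same pattern extends to the full ten-entry response: swap in the complete array and count any of the four dimensions (responsibility, reasoning, policy, emotion).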