Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just phoned my bank and spoke with 'AIDA' - (Hail Hydra) It was the first time I have heard an AI use a humans voice in this way rather than the usual 'press this button, options' type of voice you get. AIDA advised me to clear my browser cookies like it is a 1990's level online customer service level app. It was friendly and bubbly until I asked for a human cust. serv. agent, which was spoken by me with a slight change in my tone/intonation. AIDA went serious and less bubbly in its response. IOW the AI responded to my change in tone. It truly sounded slightly miffed or disappointed that it could not solve my log in issues and its learning algorithm was saying it had failed or such like... It changed based on how I changed. I was truly irked at being told to clear my cookies... lol.. AIDA picked up on that... It was unsettling and not what I would call a good cust. serv. experience. I don't need to be a little freaked out on Monday morning when I am trying to sort my bank account out.. I just needed a human, which I got and the person sorted me out within a couple of minutes... AI is great for intro stuff but lets go straight to humans please... no delay tactics due to employing no paid staff to deal with customers... Also, how does AI know when it is told to deploy weather and climate modification ops, that the public does not want that? AI is super prevalent in the world of weather mods. How does AI know not to deploy in any scenario where people are against what it does.??
youtube AI Governance 2025-06-23T09:2…
Coding Result
Dimension       Value
---------       -----
Responsibility  company
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwmwq2HkwOKVz-K98x4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgzN4qoPTEq3mCMXqWN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugw7pmPSccTpBDj6Ih14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgwAzJ5KZQs_qY45ckp4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxlZy_mtFIR3993Dsp4AaABAg", "responsibility": "company",     "reasoning": "contractualist",   "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgxWElIu6YFy1-3wtWR4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgzVDZ3tOcbnRn8f_vF4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwq6dIsIbjGYdPUGE14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_UgyML-R3NjX5FbAjDe94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgzrtNkMuvI9mobMPd14AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"}
]
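If you want to check a raw response like this programmatically rather than by eye, the sketch below parses the batch, tallies each dimension, and flags any value outside the set seen in this sample. The field names come from the record above; the allowed-value sets are inferred from this one batch, not from the project's full codebook, so treat them as assumptions.

```python
import json
from collections import Counter

# Raw LLM response for this batch, copied verbatim from the record above.
raw = """[
 {"id":"ytc_Ugwmwq2HkwOKVz-K98x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzN4qoPTEq3mCMXqWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugw7pmPSccTpBDj6Ih14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgwAzJ5KZQs_qY45ckp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxlZy_mtFIR3993Dsp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxWElIu6YFy1-3wtWR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgzVDZ3tOcbnRn8f_vF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_Ugwq6dIsIbjGYdPUGE14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgyML-R3NjX5FbAjDe94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgzrtNkMuvI9mobMPd14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

records = json.loads(raw)

# Value sets observed in this sample only; the full codebook may define more.
OBSERVED = {
    "responsibility": {"developer", "ai_itself", "company", "user", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval"},
}

def off_codebook(rec):
    """Return the dimensions whose value falls outside the observed set."""
    return [dim for dim, allowed in OBSERVED.items() if rec.get(dim) not in allowed]

# Map comment id -> list of out-of-set dimensions (empty dict = batch looks clean).
problems = {r["id"]: off_codebook(r) for r in records if off_codebook(r)}

# Per-dimension tallies, e.g. how often the model assigned each responsibility label.
tallies = {dim: Counter(r[dim] for r in records) for dim in OBSERVED}
```

With this batch, `problems` is empty and `tallies["responsibility"]` shows, for example, how often the model coded "developer" versus "company" across the ten comments.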