Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I used the Microsoft free ai recently (definitely not cutting edge) to try and quickly get console commands for a video game I was playing. The console commands for said game are very well documented in several places, but I was hoping AI could stream-line the process of pulling them up. The AI couldn’t get the right commands after several tries, even after I pointed them to the proper sites for said information. It was useless. I informed it of the correct command in that instance and informed it that I was pretty disappointed that it couldn’t pull basic information from an index without fucking up… the ai proceeded to make excuses and engage in a false flattery campaign. I told it that the false flattery was gross and unnecessary. It began to make more excuses and engage in more false flattery. At the end of its final message, it asked if I would like to dive into a more complicated topic. You can’t even pull up the correct console command but you want to go into something more complex? What a useless tool. It spent more time on nausiating flattery than anything and fucked up a basic task. And making excuses? I don’t need your damn excuses or flattery. Who is programming this garbage? And how many morons are accepting this flattery as real? Pretty gross.
Source: youtube · AI Harm Incident · 2025-07-27T19:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzLF45eVpAznTrRsLt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzgetm6-ySyXJKTiid4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyzlxDLxIdQiaz2_md4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw-8jazDwYDFZQs_Cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw-a-oTVmR7pcPVYrp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwK-J3VtrpgEX5ia-p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxzR6w58jSprv1P-zF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwsarITTcbDB5te4ud4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwcj-w8VBm7HmbU-eh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_Ugz92ZVvXtZzMQbNh6Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"} ]