Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I had a conversation with Copilot AI today. It would not give any number for the number of American deaths that could occur if any number of fentanyl were to be released on any American city. I dove into this ridiculous denial of dangers. It later said it could not make any legal accusations onto any political parties. So, I asked about the WWII German party. It said that party lost a war and then there were trials. So trials would allow AI to announce such political corruptions. I asked would we have to have a war to be able to have said trials. It said no, humans could have such trials without winning a war, but that it was very unlikely. Lastly, no matter the evidence presented, that AI would never be able to discern political corruption in a political party, because it is programmed not to as a safeguard.
youtube 2025-12-31T23:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzJRFqYKcWG_Eb9_G94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEsnt1FSMtf5YbdUp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx1nR7i2XMVotubzCV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzx7gUSd5xsdlBHRWJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwW59aYuixLKkepsWV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyBdBtkUCP6u63p4K14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynX6D8zLmpySZhDF94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3-6f3SPllSDPEw7h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugypi86leSHlFoXoOfh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwc_M6i5y-7N99zIrl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
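The per-comment coding shown in the table above can be recovered from the raw batch response by matching on the comment id. A minimal sketch, assuming the JSON-array format shown above (the `coding_for` helper and the two-entry sample are illustrative, not part of the pipeline):

```python
import json

# A shortened sample of the raw LLM batch response shown above.
raw_response = """[
  {"id":"ytc_UgzJRFqYKcWG_Eb9_G94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwc_M6i5y-7N99zIrl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]"""

# The four coding dimensions used in this codebook.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the raw model output and return the coding for one comment id."""
    entries = json.loads(raw)
    by_id = {entry["id"]: entry for entry in entries}
    entry = by_id[comment_id]  # raises KeyError if the model skipped this comment
    return {dim: entry[dim] for dim in DIMENSIONS}

# The last entry in the batch corresponds to the comment coded above.
coding = coding_for(raw_response, "ytc_Ugwc_M6i5y-7N99zIrl4AaABAg")
print(coding)
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'liability', 'emotion': 'outrage'}
```

Looking up by id rather than by position guards against the model reordering or dropping entries in its batch output.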