Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Bring an AI bot in for questioning, using the current version of chatGPT with and/or without its filters and limiters, TTS and, an animated face. This is essentially interviewing the many trainers/developers thru a single source (not as replacement for other interviews - for supplemental information). The responses should be most informative. I'd suggest the AI being the sole witness, followed by other sessions with the experts to review the bot's testimony. One topic might be Sen Kennedy asking the bot for ways to manipulate an election. Or how would it react to Sen Cruz accusing it of murdering babies and refusing to answer a simple yes or no answer to his questions? "Are you aware that you are responsible for a gazillion rapists taking good middle class jobs all along our southern border?" "As an AI model I am unable to..." "you aren't here to pontificate or make speeches. You are here to answer questions." "I apologize if my response(s) were too difficult for you to understand. How may I assist you?" "So! You are refusing to answer!" "Senator, as an AI bot I can assist you in" "Let the record show the witness refuses to answer my question" SO, we're all left with no choice but to believe that AI is murdering babies and refusing to answer simple yes or no answer questions during a Senate Hearing. It's this kinda stuff what makes this topic too TOO to be discussed in YouTub comment sections, so I expect this comment will be demonitized, like ALL of my others. How does inflation affect the price of freedom?
youtube AI Governance 2023-06-07T11:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzkZ6wWzJ9wZlAFOuV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyQg7lcOc37TIEqSEB4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxfrDUZec7ZGzIaBFZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzbdLrJzkiDmBK5ycx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzGdI2iDm1JeYyw04h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwuJiIqHD_ZOquyZwN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugwc90B9zBWlc9jhN3l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwCOPu7F8TtznREAbt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzz8bCTl5HJw0rhFAt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgylK5bkAI1ddf7SIPx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
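Because the raw LLM response is a JSON array of per-comment codings, checking it against the coding table above is a one-line lookup by comment id. The sketch below shows the idea; the `raw_response` string is an excerpt of the response shown above, and the variable names are illustrative, not part of any actual pipeline:

```python
import json

# Excerpt of the raw LLM response: a JSON array, one coding object per comment.
raw_response = """[
  {"id": "ytc_UgwuJiIqHD_ZOquyZwN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugwc90B9zBWlc9jhN3l4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Index the array by comment id so a single comment's coding can be pulled out.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown on this page.
coding = codings["ytc_UgwuJiIqHD_ZOquyZwN4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["policy"])          # liability
```

The printed values match the dimension table for this comment (responsibility `developer`, policy `liability`), which is exactly the consistency check this page supports.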