Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
OK, full disclosure I regularly listen to Mr. Ballin strange dark and mysterious… And murder podcast so forgive me if I find it interesting that all of the questions are about how humans would use AI for good??? Have we not thought about the villains of society??? The Voldemorts, Sauron’s… or ummmmmm…Like Agent Smith or the MF Terminator??? Ummmmm… hello. Ok, but for real, I feel like the questions are asked from the perspective that humans are naturally good, and we come to this challenge with the perspective of a human… so if a bad actor asked AI, how do I commit the perfect crime? (😬😬I watch a lot of crime podcasts where people tend to search the internet this Q). Soooooo like what if someone asked AI this question, does AI told the person I don’t think it’s a good idea and I’m not gonna tell you? Or does AI tell people the actual answers? I feel like one topic regarded how emotions play into AI… do certain things cause sadness, regret, empathy, sympathy, if that’s the case… if One understands these emotions, they also have to understand the opposite emotions. So AI understanding the opposite sides what makes us think they’re gonna choose the side we think is right based on morals and cultural understandings. Does AI feel sad if somebody dies as a result of them providing a platform for committing a crime? I felt a disconnect?. Ezra continues the conversation from the human side (devil’s advocate) but Mr. Yudkowski explains from the ai POV. lol, I need a (layman’s terms) translation to truly understand the analogies. This is a case where the parents find out their kids are smarter, and what would humans do when they find out they are smarter than the rest and possibly more influence, curiosity, lack of actual human contact…I mean, think young boys risk taking, but no emotion attached to consequence???
youtube AI Governance 2025-10-16T03:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxYZZUWf1e0BmiKVjB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyj61IC9y4O1eajFIx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugw3j7ix_m4O6fjeX954AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy4-TMZuxbJYngQ8Mx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw58XvKpbBYlzWFchJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyGHqp3D-7GTb5h_Id4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwzjuhdXejZ9g-vYPJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxEeUhfSG0D_ImVweV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzohPviAeIyf6Vgdm54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxjhUMpviZsQdzeHKJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]