Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Around the 37 minute mark the conversation shifts to the public and private faces of the billionaires who are deeply vested in AI and who think that they can use it responsibly even though they see the outcomes as distinctly negative in private conversations. This is classic hubris, and wherever hubris goes, nemesis follows. (It is the One Ring problem of the responsible use of extreme power: even with the best of intentions, that much power will ultimately cause catastrophic consequences. Hence my reference to Hubris-Nemesis. Even if it is *merely* unintended consequences, that is precisely what will follow, and we are headed down that road. I don't know what to suggest, because game theory will lead to races based on what other players do and expect others to do, and that is how it all goes deeply sideways.)
youtube AI Governance 2025-06-19T18:4…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzqsPgpYMdeh4qBafZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyD3Y_NWKuQK_w0Nyx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzaV0rP8zakKld5YR54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxXCmAKdGQne-DvCiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPU1mkzxXZ5aPr2iN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzo6vo1ugsWgWUfBeR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyugV5b0_J8gcBiIQt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxwhwHmki-jJLOTxnF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwR3VFGq_BQHK_rcBl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz6tH3QYNAX1SO_Y8V4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
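Inspecting the raw output for a given coded comment amounts to matching on the comment id in the JSON array. A minimal Python sketch of that lookup, assuming the response structure shown above (the helper name `coding_for` is illustrative, and the raw string here is trimmed to a single entry from the array):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Trimmed here to one entry; the real response holds one object per comment.
raw_response = '''[
  {"id": "ytc_UgwPU1mkzxXZ5aPr2iN4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

def coding_for(raw, comment_id):
    """Return the coding entry for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw_response, "ytc_UgwPU1mkzxXZ5aPr2iN4AaABAg")
print(coding["emotion"])  # prints: outrage
```

The same lookup against the full response above reproduces the coding table for this comment (responsibility: company, reasoning: deontological, policy: liability, emotion: outrage).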