Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Watched so many videos like this. Whenever I hear the concept of AI alignment talked about, I just sigh. As if we have common values to align with, that's just so "on show" all the time right? AI could be perfectly aligned with anyone of us, and be completely unaligned with most of the rest of us. I don't want to sound like an arse but this is what frustrates me most about this podcast. There just doesn't seem, despite all the prep notes on the iPad, that ability to dig deeper and really question crucial things like this. It's superficial at best. It's disappointing, it feels like you get great interviewees but I doubt if they shared their thoughts privately it would be because they expect a great interview, rather that they'll reach a large audience for their relatively small investment of time. I'd like to see guest interviewers on this podcast, even if just occasionally, but I suspect ego would get in the way of that idea ever seeing the light of day.
youtube AI Governance 2025-12-08T19:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwLw8iJns_MTa6ND114AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw3IOkTSQFqTxULx3V4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwhnOb8bnibccTOwa94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8H4QPoVCvOdhjFHV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyrtdbbezQ3T0lJzr14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzh1ol_OhxPFraYH9Z4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw2S1nhJd9Yus4gV1Z4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwBYP2JFLAiuWPxqc94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9zM_QpopYKPx7EWh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw18e-FTzQrKA18g0R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
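The coding result shown above appears to correspond to one record in the raw response, matched by comment ID. A minimal sketch of that lookup, assuming the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) from the response and trimming the array to the single matching record for brevity:

```python
import json

# Raw LLM response, trimmed to the one record whose dimensions match the
# "Coding Result" table above (record for comment ytc_Ugzh1ol_OhxPFraYH9Z4AaABAg).
raw = '''[
  {"id": "ytc_Ugzh1ol_OhxPFraYH9Z4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]'''

# Parse the JSON array and index records by comment ID for O(1) lookup.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

# Pull the coded dimensions for the comment, dropping the ID field itself.
row = by_id["ytc_Ugzh1ol_OhxPFraYH9Z4AaABAg"]
dimensions = {key: value for key, value in row.items() if key != "id"}
print(dimensions)
# {'responsibility': 'unclear', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

This is an illustrative reconstruction, not the pipeline's actual code; the real coder presumably also validates that each dimension's value falls within its allowed label set before writing the table.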