Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I haven't yet seen anyone mention the fact of how AI is STILL portrayed in media including movies and series even in 2024 as still having a level of intelligence we can kinda sorta comprehend as just slightly above-human level and contained in a humanoid body. Just this year alone we've had Tron:Ares, Alien:Earth and Predator:Badlands. We're already accustomed to what I'd now call "slow" depictions of AI from things like The Terminator, Blade Runner and so on when the reality is we're talking about technology that will be able to think MILLIONS of times faster than us and create things we can't even comprehend. It won't have human empathy, which frankly is a massive concern! We're so obsessed with our own image that we're expecting AI to look and act like us, because we're so damned smart and self-important, when the reality is going to be VERY different. AGI will be able to do YEARS of thinking in the space of a few seconds. You can't outsmart something that thinks that far ahead of you - if we get this wrong we are screwed on every level.
YouTube · AI Governance · 2025-12-05T01:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyUxRpcW3m4Oa8MQOt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxNgHvetyJmP3wNpPp4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyMDC8m8jDdWEVovjR4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugz4lg1qUiS4XAVXo-t4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxRG72du5S7mL9FC2B4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyQVBtuL3R9eNXG3yt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugyq8p95FKR0z5RJl5d4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_UgxnrwHG1jZaYPrmbth4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzdTgHylS6wkMQUOsd4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyK9fPBALTcFds3HAR4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "mixed"}
]
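Because the model returns one JSON array for a whole batch, each comment's coding has to be matched back by its id. A minimal sketch in Python, assuming the response is a JSON array of objects keyed by a "ytc_…" comment id as above (the helper name `coding_for` and the default fallback value are assumptions, not part of the actual pipeline):

```python
import json

# One entry copied from the raw response above, standing in for a full batch.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugz4lg1qUiS4XAVXo-t4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
"""

# The four coded dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding for one comment id."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {d: entry.get(d, "unclear") for d in DIMENSIONS}
    raise KeyError(f"no coding returned for {comment_id}")

coding = coding_for(RAW_RESPONSE, "ytc_Ugz4lg1qUiS4XAVXo-t4AaABAg")
print(coding)
```

This reproduces the table above: the fourth entry in the batch is the one whose id matches this comment, which is why the displayed result is responsibility=none, reasoning=unclear, policy=none, emotion=indifference.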