Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m sorry but I’m so confused. He’s warning us about ai… but in the same breath he’s essentially responsible for all ai/ ai-related developments, like a whole shit ton of stuff that only exists now because AI exists. Right? I mean seriously… I’m asking this question because I’m having a hard time believing that I see the full picture correctly, here. If I am, then doesn’t this look and sound a whole hell of a lot like the prologue to any one of the countless sci-fi horror flicks we all grew up watching? The ones wherein we are immediately introduced to some genius scientist who offers up an ominous warning about the very thing he’ll inevitably and inexplicably invent, himself. A warning that, once it does exist, it’ll inevitably lead to the end of the world as we know it. I know life imitates art, but come on. How is it even possible to walk right into this proposed shit storm after flicks like “The Matrix”, “I, Robot”, etc? 🤨
youtube AI Governance 2025-06-17T04:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzwODDE0SOHqAcapOl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"confusion"},
  {"id":"ytc_UgxmQ6U3kFq3wog7sb94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzwFie7uW70uYQXc4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzVXQ7bNBCy2F9daqF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxUgdjkm_2kzNyuYuB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy7U8DsoKpjcuzUEeZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzaCyrGouZWWM-HK914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFB66FwQ8bOpXDYeF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzK8PuaoIyBSUEY8O14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0jG-HG6ylk3u5aGN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
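A raw response like the one above can be turned into per-comment codes by parsing the JSON array and indexing the records by comment ID. The sketch below assumes the model returned valid JSON (in practice a try/except around the parse is prudent); the variable names are illustrative, not part of any pipeline shown here, and the embedded string is truncated to two of the ten records for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# (Truncated to two records here; the full response contains ten.)
raw = '''[
  {"id":"ytc_UgzwODDE0SOHqAcapOl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"confusion"},
  {"id":"ytc_UgzwFie7uW70uYQXc4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Index the records by comment ID for O(1) lookup per comment.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for a single comment.
code = codes["ytc_UgzwODDE0SOHqAcapOl4AaABAg"]
print(code["responsibility"], code["emotion"])  # developer confusion
```

Keying on the `id` field also makes it easy to detect comments the model skipped or duplicated: compare the set of returned IDs against the set of IDs sent in the prompt.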