Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Oh boy, here we go again — another “the end is near” prediction. 🤦‍♂️ People said the same thing about Y2K in 2000, when computers were supposedly going to explode at midnight. Then it was the Mayan calendar in 2012, when the world was supposed to disappear in a cosmic reset. After that came the “death of Bitcoin,” the “collapse of social media,” and even the “end of music” when streaming took over. Guess what? We’re still here — tweeting, scrolling, and arguing online. Now it’s “AI will collapse by 2027.” Please. Technology doesn’t collapse — it changes. Humanity panics, misuses it for a while, then learns to adapt. We’ve done this dance with electricity, television, the internet, and every invention that scared people before it. The cycle never fails: fear first, progress later. AI isn’t going to “end.” What’s more likely is that half the people saying this will be using AI to write their “See? I told you so” posts in 2028. 😅 Humanity has a short memory and a long history of overreacting — this is just another sequel in the same doomsday franchise. But here’s the real question: by 2030, will you be part of the next “collapse”… or will you finally turn to God? Because the only thing that truly collapses is a world that forgets who created it.
youtube AI Governance 2025-10-24T11:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxY--aBv5BGZM-fVsp4AaABAg", "responsibility":"none",        "reasoning":"mixed",            "policy":"none",          "emotion":"indifference"},
  {"id":"ytc_UgxItDMYWi2qOY-mMnt4AaABAg", "responsibility":"developer",   "reasoning":"consequentialist", "policy":"liability",     "emotion":"fear"},
  {"id":"ytc_Ugys1i1KSFZPiGWIi-h4AaABAg", "responsibility":"distributed", "reasoning":"consequentialist", "policy":"unclear",       "emotion":"outrage"},
  {"id":"ytc_UgyNS4mW93NjepJK--Z4AaABAg", "responsibility":"ai_itself",   "reasoning":"deontological",    "policy":"ban",           "emotion":"fear"},
  {"id":"ytc_UgxUwSbsz6OL1I8v6Hl4AaABAg", "responsibility":"developer",   "reasoning":"consequentialist", "policy":"ban",           "emotion":"outrage"},
  {"id":"ytc_UgxbQK13unUDsZOiyeN4AaABAg", "responsibility":"none",        "reasoning":"deontological",    "policy":"none",          "emotion":"indifference"},
  {"id":"ytc_UgxURG_AzO1i0iooh1t4AaABAg", "responsibility":"distributed", "reasoning":"virtue",           "policy":"industry_self", "emotion":"approval"},
  {"id":"ytc_UgwuhR42i_4T1CzWjXN4AaABAg", "responsibility":"ai_itself",   "reasoning":"consequentialist", "policy":"none",          "emotion":"resignation"},
  {"id":"ytc_Ugz5wjSG2x4LQaqMGix4AaABAg", "responsibility":"none",        "reasoning":"contractualist",   "policy":"regulate",      "emotion":"approval"},
  {"id":"ytc_UgyNEeqgUVAQTRBo48R4AaABAg", "responsibility":"none",        "reasoning":"unclear",          "policy":"none",          "emotion":"mixed"}
]
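The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a batch could be parsed and validated, indexing codings by comment id; the allowed label sets below are inferred only from the values visible in this export and may be incomplete, and `parse_coded_batch` is a hypothetical helper, not part of any tool referenced here:

```python
import json

# Label sets inferred from this export (assumption: possibly incomplete).
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological",
                  "virtue", "contractualist", "unclear"},
    "policy": {"none", "liability", "unclear", "ban",
               "industry_self", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval",
                "resignation", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: label}},
    rejecting records whose labels fall outside the known sets."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With the response above, `parse_coded_batch(raw)["ytc_UgxY--aBv5BGZM-fVsp4AaABAg"]["emotion"]` would return `"indifference"`, matching the coding-result table.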