Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Imagine we are living back in the early 90s, when so, so few of us had the technocratic mindsets and worldviews that unavoidably define us now. But imagine also that AI was as developed then as it is today, and that what we knew about it came to us through the piecemeal formats of TV/radio science programmes, magazines, books etc. There would still be Terminator-esque debates galvanising circles across the globe, peopled by fans, students, and scholars all in the know. And what we've been saying here and now would almost all have been said then and there. But there would (arguably) be a crucial difference in the psychological effects of knowing about AI then and knowing about it now, and that difference lies in the perceived reality of its 'level' of threat. What we think of AI right now, and how we feel about it, is unavoidably exaggerated and overly dramatised by the immediacy of overlapping opinions on what it is, what it can do, what people say it can do, what it can't do, and what people say it can't do, and - most tellingly - how it's made to sound and how it's made to look. What I'm arguing is that much of our emotive response to AI is based upon a heightened sense of threat naturally felt when we are forced by social media to over-anthropomorphise it. If we can step back and force ourselves to picture AI as it truly is, then maybe we can calm the unnecessary and irrational distress we suffer when drowning in a deluge of clips, content creation, and commentary, much of it pessimistic, much of it welcoming, but all of it as truly uncertain as next week's weather forecast made today. AI runs on machine code, like Manic Miner and Chuckie Egg did in the 80s - the same machine code that powers the Office 2000 suite I still sometimes use, but which doesn't run so well on my old but relatively later Windows 10 tower.
I remember reading about expert systems in the late 90s and noughties in my college library, and artificial intelligence was a term that - looking back - was properly set in scientific terms, and in a scientific context, one devoid of sensationalist speculation. I closed that book feeling intrigued, and that was that. The only panic I felt was knowing I had an essay on network protocols that was unfinished (unstarted) and overdue, and that I was procrastinating to the point of academic failure. The commercialisation of AI and the flood of subsequent marketing is what is now automatically driving and dangerously accelerating our dystopian terror, and it doesn't take a deeper-than-human mind to predict with a 100% hit rate how ethics and money are ranked on any corporate mission statement. This is not so much a critical pose as a realistic observation. But the upshot is that everything we see and hear about AI today is an incomplete snapshot of a hell-sea of speculative chaos that will never - NEVER - pause long enough to privilege any of us with a true picture of what AI is and where it's heading. The medium is not the message, and the message - if it could already exist - would need updating by the end of this sentence.
youtube AI Governance 2023-12-31T08:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyxcgm-SsOKdwthpeF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxneq_ikyR4qO8Z-9x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzZpsZzwE_fL1dXfSB4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzhHSvYH0xMpgG7jRp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyDawT0B2H5Qve0m854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwfzppuE9dbyunBash4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxGUDtg4tDNf9ksfEp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwBxec4n-rT6xAmwP54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzudw-N4GIj8ynKKBB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwSYEPL_G38Wh3Q4Kx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
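The raw response above is a JSON array that codes each comment id along four dimensions (responsibility, reasoning, policy, emotion). As a minimal sketch of how such output can be parsed back into per-comment records, assuming only the JSON structure shown above (the two entries are copied from the array; the variable names are illustrative, not part of any pipeline):

```python
import json

# Raw model output: a JSON array of per-comment codings
# (two entries excerpted from the response above).
raw = """[
  {"id":"ytc_Ugyxcgm-SsOKdwthpeF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzudw-N4GIj8ynKKBB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for the comment displayed above.
coding = codings["ytc_Ugyxcgm-SsOKdwthpeF4AaABAg"]
print(coding["emotion"])  # mixed
```

Keying the array by id makes it easy to join the model's codings back to the original comments, which is how a table like the "Coding Result" above can be produced for any single comment.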