Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ah, the looming spectre of AI, with its promises of a bright future and its not-so-subtle hints of potential doom. Sasha Luccioni, an AI ethics researcher, steps onto the TED stage to remind us that while AI might not be gearing up to terminate humanity a la Terminator, it's still not exactly the trustworthy robot friend we might hope for. Luccioni brings us back down to Earth (pun intended) by pointing out some very real and current concerns about AI. Forget about sci-fi scenarios of AI overlords; let's talk about the here and now. She dives into the environmental impact, highlighting how AI's insatiable hunger for data and computing power can leave quite the carbon footprint. It's like realising your friendly AI assistant might be secretly a carbon-spewing factory in disguise. But wait, there's more! Luccioni also shines a light on AI's less-than-stellar track record when it comes to copyright infringement. Picture AI as that kid in school who always copied someone else's homework but with the potential for much bigger consequences. From music to art to literature, AI's ability to churn out "original" works raises some serious questions about intellectual property. And let's not forget the delightful world of biased information. Luccioni doesn't hold back in pointing out how AI, with its algorithms and data sets, can unwittingly perpetuate and even amplify biases. It's like giving your well-meaning but slightly clueless friend the keys to the internet and hoping they won't mess things up too badly. But fear not, for Luccioni doesn't just leave us in a pit of AI-induced despair. She offers a glimmer of hope with practical solutions. Regulations, transparency, inclusivity—these are the tools she presents to navigate our AI-filled future with a bit more confidence. So, here's to Sasha Luccioni, the voice of reason in the cacophony of AI hype. May her words serve as a wake-up call to the very real challenges we face in this AI-driven world. Let's regulate, let's be transparent, and let's ensure that our AI companions of the future are more Wall-E than Skynet. After all, it's not about fearing the robot apocalypse—it's about making sure we're the ones in control of the machines, not the other way around. Cheers to a future where AI is a helpful tool, not a cause for concern! 🥂🤖💡
youtube · AI Responsibility · 2024-03-18T04:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
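
For reference, the record schema implied by this result can be sketched as a small Python structure, matched to the raw batch response in the next section by comment id. The label sets below are only those observed in that response; the actual codebook may define more categories (an assumption of this sketch):

```python
from dataclasses import dataclass

# Label sets observed in the raw batch response below; the real
# codebook may allow additional values (assumption).
RESPONSIBILITY = {"ai_itself", "developer", "company", "government", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "ban", "liability", "regulate"}
EMOTION = {"indifference", "outrage", "mixed", "fear"}

@dataclass
class CodedComment:
    """One coded comment, as emitted by the model for each id."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any value outside the observed label sets.
        assert self.responsibility in RESPONSIBILITY, self.responsibility
        assert self.reasoning in REASONING, self.reasoning
        assert self.policy in POLICY, self.policy
        assert self.emotion in EMOTION, self.emotion
```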
Raw LLM Response
[ {"id":"ytc_UgwLje3JZidsBiLfa0N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyujmSXYU6qrrU62VN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz_VV57TsKeJ1j_CT54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugz8IuyjoaW-a-KFs414AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzvLH_gJl7QilpZqHV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxv7twu9-wkDVzdH4x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwq4ztuAAH5dfLkdLx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxzcNyKK8kv6FCKg8J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyU5rBuVAxhjflIlix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwqlU-mdQ7ohErs2Th4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]