Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
- Sasha Luccioni, an AI researcher, opens her TED talk by recounting an unusual email from a stranger claiming her work in AI would lead to humanity's demise, setting the stage for a discussion of the perception and reality of AI's dangers.
- Luccioni acknowledges the widespread fascination with AI, from positive contributions such as medical discoveries to darker implications such as harmful chatbot interactions and biased AI meal-planner suggestions.
- The talk moves beyond speculative doomsday scenarios to focus on AI's present-day impacts on society and the environment. Luccioni points out that AI is not isolated but interwoven with societal structures, contributing to climate change, using data without consent, and perpetuating biases.
- A major concern is the environmental cost of AI, particularly the energy consumption and carbon emissions of training large language models like Bloom. She stresses the importance of measuring and disclosing these environmental impacts and advocates for sustainability in AI development and deployment.
- Luccioni also discusses consent in data usage for AI training, citing artists and authors whose work has been used without permission, and emphasizes the need for tools like "Have I Been Trained?" to provide transparency and accountability in how training data is sourced.
- The talk addresses biases inherent in AI systems, especially concerning race and gender. Luccioni shares examples of biased facial recognition systems and biased image generation models, and introduces tools like the Stable Bias Explorer to help understand and mitigate bias in AI.
- Luccioni concludes by advocating for collective action on AI's current impacts, stressing transparency, accessibility, and accountability in AI development and deployment. She emphasizes the need for informed choices, legislation, and governance so that AI benefits society while minimizing its negative impacts.
youtube AI Responsibility 2024-03-22T02:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwhjzXiniDk4ZH3By14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy6tN1xjt8CVdjFvR94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyaO0mr0yiRxU_4uIR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxiqYOVFnPnigglyYp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwFJcb1FM8qAhBPb7p4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgywAx4e51qVy7qGwGl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugzs5rwqH5fbHHfl4iF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzEc1dPjbu_gsJdWe54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw4TLIdBUKp_xNLb9p4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyvfHuD6z600CA_dl14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
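A raw response like the one above can be parsed and summarized per coding dimension with standard-library tools. This is a minimal sketch, assuming the model output is a valid JSON array with the four dimension keys shown (the `raw` string here is truncated to two of the records above for brevity):

```python
import json
from collections import Counter

# Sample of the raw model output shown above (truncated to two records).
raw = '''[
  {"id": "ytc_UgwhjzXiniDk4ZH3By14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy6tN1xjt8CVdjFvR94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

records = json.loads(raw)

# Tally how often each code appears, per dimension, across all comments.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(r[dim] for r in records)
    print(dim, dict(counts))
```

In practice the same loop runs over the full ten-record array; a `json.JSONDecodeError` guard around `json.loads` is advisable, since raw LLM output is not guaranteed to be well-formed JSON.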