Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
According to the speaker, Sasha Luccioni, AI is dangerous not solely due to future existential risks but because of its current tangible impacts on society and the environment. She emphasizes several reasons why AI is currently dangerous:

1. **Environmental Impact**: AI models, particularly large language models, consume vast amounts of energy and emit significant carbon dioxide during training. This contributes to climate change and environmental degradation.
2. **Data Usage Without Consent**: AI often uses data, including art and books, without the consent of the creators. This raises ethical concerns regarding intellectual property rights and fairness.
3. **Bias and Discrimination**: AI systems can perpetuate biases present in their training data, leading to discriminatory outcomes, especially against marginalized communities. This bias can manifest in various applications, such as facial recognition systems and image generation models.
4. **Privacy Concerns**: AI-powered systems collect and analyze large amounts of personal data, potentially infringing on individuals' privacy rights and leading to surveillance concerns.
5. **Automation and Job Displacement**: AI's automation capabilities have the potential to displace jobs and disrupt entire industries, leading to economic challenges and social inequalities.

Overall, Luccioni argues that focusing solely on future existential risks detracts from addressing the current tangible impacts of AI. She advocates for increased transparency, accountability, and the development of tools to measure and mitigate these impacts to ensure that AI technology is deployed responsibly.
youtube AI Responsibility 2024-03-22T02:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgzloawuGy8yAUD0_Ix4AaABAg.A7eoaNd1O7PA7oH9vHSxb9", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgylcPQm_lUOYN5MqUR4AaABAg.A4X96dbhSIAA4n4fQ8R0vR", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytr_Ugx1A5dwaTl9N_aVo794AaABAg.A4ISD_9IVHFA4UJOEERPjN", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgwIFZ8P9w_UO8RSeXx4AaABAg.A4BqN4ywUZ0A4UJwxYvWo3", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_Ugzrcv5S8ekDEBm_HGF4AaABAg.A3kgzJzcWuCACREIWmaSOm", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugz4blCArsO3h3ffrVp4AaABAg.A2p77RtKYMiAAxpXKSGyB3", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwOJul1_m4VxbnlICd4AaABAg.A1wNLhsbySHA2p7mLzzZk-", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "mixed"},
  {"id": "ytr_Ugx7LJImRkX-5O9LcmF4AaABAg.A1tLbTkwDYsABgw2Tqg5uD", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytr_Ugw4TLIdBUKp_xNLb9p4AaABAg.A1Ggk4QvVgoA1GgzfsNrPS", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwqlU-mdQ7ohErs2Th4AaABAg.A0bdlelbbxBA0ozmJJS4Sf", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
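A batch response like the one above can be indexed back to per-comment coding rows with a few lines of Python. This is a minimal sketch, not the tool's actual pipeline: the `parse_codings` helper and the `DIMENSIONS` tuple are illustrative names, and only two entries from the batch are reproduced here.

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw = """[
  {"id": "ytr_Ugw4TLIdBUKp_xNLb9p4AaABAg.A1Ggk4QvVgoA1GgzfsNrPS",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwqlU-mdQ7ohErs2Th4AaABAg.A0bdlelbbxBA0ozmJJS4Sf",
   "responsibility": "user", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Index the batch response by comment id, keeping only known dimensions.

    Missing dimensions fall back to "unclear" rather than raising.
    """
    rows = json.loads(text)
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }

codings = parse_codings(raw)
print(codings["ytr_Ugw4TLIdBUKp_xNLb9p4AaABAg.A1Ggk4QvVgoA1GgzfsNrPS"]["emotion"])
# prints: fear
```

Keying the result on the `id` field is what lets a UI like this one join each coded row back to the original comment record.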