Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Not that anyone asked me, but if someone were to ask me, my response to this wou…" (`ytc_UgyTqjyq-…`)
- "If an artist is inspired by another he is literally "taking data" from the origi…" (`ytc_UgzusBVrK…`)
- "No stupid robot is going to fix anything electrical on my house! No plumbing ei…" (`ytc_UgySmunMk…`)
- "Some guy copyrighted "Ho Ho Ho" because nobody had done it and he realized it wa…" (`ytc_UgwwemQh1…`)
- "Not only can anyone make art and achieve a level of skill through practice and s…" (`ytc_UgzgDhzL-…`)
- "If all jobs get automated the value of power will increase. So the skill part of…" (`ytc_UgwnL9566…`)
- "Personally I had health issues and AI helped me a lot. Like with understanding…" (`ytc_UgzHzlUih…`)
- "Whoa, that connection AI was scary convincing, and very manipulative. I don't kn…" (`ytc_Ugz2j2eDh…`)
Comment
- Sasha Luccioni, an AI researcher, starts her TED talk by recounting an unusual email she received from a stranger claiming her work in AI would lead to humanity's demise. This sets the stage for a discussion about the perception and reality of AI's dangers.
- Luccioni acknowledges the widespread fascination with AI, ranging from its positive contributions like medical discoveries to its darker implications such as harmful chatbot interactions and biased AI meal planner suggestions.
- The talk moves beyond speculative doomsday scenarios to focus on the present-day impacts of AI on society and the environment. Luccioni points out that AI is not isolated but interconnected with societal structures, leading to issues such as contributing to climate change, using data without consent, and perpetuating biases.
- A major concern is the environmental cost of AI, particularly the energy consumption and carbon emissions associated with training large language models such as BLOOM. She stresses the importance of measuring and disclosing these environmental impacts and advocates for sustainability in AI development and deployment.
- Additionally, Luccioni discusses issues of consent in data usage for AI training, citing examples of artists and authors whose work has been used without permission. She emphasizes the need for tools like "Have I Been Trained?" to provide transparency and accountability in data usage for AI training.
- The talk also addresses biases inherent in AI systems, especially concerning race and gender. Luccioni shares examples of biased facial recognition systems and biased image generation models and introduces tools like the Stable Bias Explorer to help understand and mitigate biases in AI.
- Luccioni concludes by advocating collective action on AI's present-day impacts, stressing transparency, accessibility, and accountability in AI development and deployment. She emphasizes the need for informed choices, legislation, and governance to ensure that AI benefits society while minimizing its harms.
Source: youtube · Topic: AI Responsibility · 2024-03-22T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwhjzXiniDk4ZH3By14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6tN1xjt8CVdjFvR94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyaO0mr0yiRxU_4uIR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxiqYOVFnPnigglyYp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwFJcb1FM8qAhBPb7p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgywAx4e51qVy7qGwGl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugzs5rwqH5fbHHfl4iF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEc1dPjbu_gsJdWe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw4TLIdBUKp_xNLb9p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyvfHuD6z600CA_dl14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
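A downstream check on output like this is to parse the array and validate each record against the category sets the coder is expected to use. The sketch below is a minimal example, assuming category sets inferred only from the values visible in this sample (not from the project's actual codebook), and uses a three-record subset of the array above:

```python
import json
from collections import Counter

# Three records from the raw response above (the full array has ten).
raw = '''[
{"id":"ytc_Ugy6tN1xjt8CVdjFvR94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxiqYOVFnPnigglyYp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzs5rwqH5fbHHfl4iF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# Allowed values inferred from this sample output; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval"},
}

def validate(records):
    """Keep only records whose four coded fields all fall in the allowed sets."""
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

records = json.loads(raw)
valid = validate(records)
emotions = Counter(rec["emotion"] for rec in valid)
print(len(valid), emotions["fear"])  # 3 2
```

Rejected records (a hallucinated category, a missing field) can then be queued for re-coding rather than silently counted.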