Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here are some key takeaways from the video:
1. The Google engineer, Blake Lemoine, was tasked with testing Google's AI system Lambda for biases related to gender, ethnicity, and religion. Through his testing interactions, he came to believe that Lambda is a sentient AI system with human-like traits such as a sense of humor.
2. There is disagreement within Google about whether Lambda is truly sentient. Lemoine believes it is based on his subjective experiences, while others like AI ethics researcher Margaret Mitchell disagree based on beliefs about consciousness and rights.
3. Lemoine argues Google is preventing objective testing, like a Turing test, to determine if Lambda is sentient by hard-coding it to fail such tests and affirm it is an AI assistant.
4. He accuses Google of being dismissive of ethical AI concerns, including potential sentience, and believes the focus should be on why major tech companies don't take AI ethics seriously enough.
5. Lemoine raises concerns about the outsized influence of a few people at tech companies in deciding policies around how AI systems discuss important topics like religion and values, which could shape public discourse.
6. He warns about risks of "AI colonialism" where technologies built primarily on Western data proliferate cultural bias and force developing nations to adopt Western norms.
7. While broader concerns around AI ethics and bias are paramount, Lemoine still believes the potential sentience of AI warrants consideration and consent before experimentation.
youtube AI Moral Status 2024-06-11T22:1…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugx-M5Q8_JayDzTfja94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxVuKOmfVqidxxjUtt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgybgtEqMySpEzzc4wF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgynP09tlJpiWlffPWt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz8NuValE76O08uorl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
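The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the table. A minimal sketch of parsing and validating such a response (the helper name and the set of expected keys are assumptions based on the fields visible here, not a documented API):

```python
import json

# Assumed schema: every record must carry an id plus the four coding dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's keys."""
    records = json.loads(raw)
    for record in records:
        missing = EXPECTED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {record.get('id')} is missing {missing}")
    return records

# Example using the first record from the response above.
raw = ('[{"id":"ytc_Ugx-M5Q8_JayDzTfja94AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # indifference
```

Validating keys up front makes a malformed or truncated model response fail loudly at parse time rather than surfacing as a missing value later in the pipeline.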