Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Everything Silicon Valley does "is like The Matrix written from the point of vie…
ytr_UgwQQYYw8…
AI is just a way to steal someone elses work without paying them, this is a grea…
rdc_jwv67gz
Ai is being inspired in a similar way as humans are. There is not really a diffe…
ytc_UgyLew8wo…
AI does many things. Copied many artists. Sculptors. Digital designers. Nobody b…
ytc_UgyI6CEKd…
Now I know the beauty of your channel and this video prove it and it remarks I'v…
ytc_UgxeA6gd9…
The way my jaw dropped when you said that the AI wrote that story. Love the con…
ytc_Ugwkuh4yy…
So, this is exactly why you should never give 'Claw bot' access to your personal…
ytc_UgwyRvPEw…
Well if you fricken idiots wouldn't drive at 70-80 mph all the time, maybe this …
ytc_Ugzqyu8i5…
Comment
Here are some key takeaways from the video:
1. The Google engineer, Blake Lemoine, was tasked with testing Google's AI system LaMDA for biases related to gender, ethnicity, and religion. Through his testing interactions, he came to believe that LaMDA is a sentient AI system with human-like traits such as a sense of humor.
2. There is disagreement within Google about whether LaMDA is truly sentient. Lemoine believes it is, based on his subjective experiences, while others, such as AI ethics researcher Margaret Mitchell, disagree based on their views about consciousness and rights.
3. Lemoine argues Google is preventing objective testing, such as a Turing test, to determine whether LaMDA is sentient, by hard-coding the system to fail such tests and to affirm that it is an AI assistant.
4. He accuses Google of being dismissive of ethical AI concerns, including potential sentience, and believes the focus should be on why major tech companies don't take AI ethics seriously enough.
5. Lemoine raises concerns about the outsized influence of a few people at tech companies in deciding policies around how AI systems discuss important topics like religion and values, which could shape public discourse.
6. He warns about risks of "AI colonialism" where technologies built primarily on Western data proliferate cultural bias and force developing nations to adopt Western norms.
7. While broader concerns around AI ethics and bias are paramount, Lemoine still believes the potential sentience of AI warrants consideration and consent before experimentation.
youtube
AI Moral Status
2024-06-11T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugx-M5Q8_JayDzTfja94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVuKOmfVqidxxjUtt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgybgtEqMySpEzzc4wF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgynP09tlJpiWlffPWt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8NuValE76O08uorl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
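The raw response above is a JSON array in which each object carries a comment `id` plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked — the allowed label sets below are inferred only from the values visible in this sample, so the real codebook may define more categories:

```python
import json

# Label sets inferred from the visible sample output; the actual
# codebook likely defines additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dim: value}},
    rejecting any dimension value outside the known label set."""
    codings = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        codings[cid] = row
    return codings

# Hypothetical usage with a made-up comment ID:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate",'
       '"emotion":"outrage"}]')
result = parse_codings(raw)
```

Validating against a closed label set catches the common failure mode of LLM coders drifting outside the codebook vocabulary before the codings reach downstream analysis.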