Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't understand how us as humans feel the need to create an artificial us wit…" (ytc_Ugxi5VE69…)
- "This is real people but they actualy are dead but they make it a robot😰😨😨😰😰😱…" (ytc_Ugx5iN49x…)
- "The most ironic thing is that Shadiversity's AI "art" looks terrible even by the…" (ytc_UgxluI1DD…)
- "A relevant and almost universal example of why this wont exist for at least 2-3 …" (rdc_fct0kql)
- "Since we can say that if autopilot detects it has created an imminent and unavoi…" (ytc_UgxUHhtSQ…)
- "in my last year of high school, a whole bunch of people in my THEATRE CLASS used…" (ytc_UgxpDUwUF…)
- "AI job losses is only a problem for the capitalist economic system. Having machi…" (ytc_UgyCTc-_w…)
- "Highlights the speed and secrecy with which AI is advancing. Truly frightening …" (ytc_UgxptME92…)
Comment
After listening to the first few minutes I had a feeling that the guy doesn't really think it's sentient but he knows that it's an interesting enough topic to raise awareness of the whole AI ethics (and AI ethics at Google) issue. He even says something like that at around the 07:00 minute mark but it flies unnoticed by the reporter. It very much seems like he wanted to expose the problem (maybe at least in part himself as an expert) and how Google doesn't handle it well. (TBH, the first thing I thought when I read the news is that they have fired yet *another* AI ethics researcher?)
LaMDA and his conversations are already good enough to sell this bait/stunt to the public. (Otherwise, he'd also run tests that try to prove that the system is not sentient and e.g. it tries to answer meaningless questions as if they were real ones.)
youtube · AI Moral Status · 2022-06-29T23:0… · ♥ 163
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
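For downstream processing, each coding result can be treated as a small typed record. Below is a minimal Python sketch of that record, assuming the value sets visible in this sample's raw response; the class name, field names, and allowed-value sets are illustrative (the full codebook may define more categories than appear here).

```python
from dataclasses import dataclass

# Value sets observed in this sample's raw response; the real
# codebook may define additional categories (assumption).
RESPONSIBILITY = {"developer", "user", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "liability", "industry_self", "none"}
EMOTION = {"approval", "fear", "outrage", "indifference", "mixed"}


@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any value outside the observed codebooks.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```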
Raw LLM Response
```json
[
  {"id":"ytc_UgzfGGdeUd0BGY3Nhm14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzgUFHUpqQBtNpeo_d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwI30bCi1l1bQm5cXJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwnDx1AYKpJnHJNpmF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugw--frEGZsJK4XqD6h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
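Because this output comes straight from the model, a lookup tool has to tolerate malformed JSON. Here is a minimal sketch of retrieving one comment's coding from a raw batch response like the one above; `find_coding` is a hypothetical helper, not part of the tool itself.

```python
import json


def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw LLM batch response and return the record for one comment.

    Raw model output is not guaranteed to be valid JSON, so parse
    defensively and return None when the record cannot be recovered.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; treat as "not found"
    if not isinstance(records, list):
        return None
    for record in records:
        if isinstance(record, dict) and record.get("id") == comment_id:
            return record
    return None


# Example: recover the coding shown in the table above.
# find_coding(raw, "ytc_UgzfGGdeUd0BGY3Nhm14AaABAg")
# -> {"id": "ytc_UgzfGGdeUd0...", "responsibility": "developer",
#     "reasoning": "consequentialist", "policy": "regulate",
#     "emotion": "approval"}
```

Returning None for both parse failures and missing IDs keeps the caller simple; a production version might instead distinguish the two cases so malformed model output can be flagged for re-coding.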