Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @vegclasma468RIGHT! I just saw another vid that said that OpenAI has acknowledge… (ytr_UgyiKVzOs…)
- @GrumpDog No It won't happen, UBI is a Financial Suicide, a delusion. it's alre… (ytr_UgyPu5MHY…)
- "Do you forsee a war with human?" AI is about to find out what is the only thing… (ytc_UgwJaCUAi…)
- "Purchasing power will stay tied to value creation" is likely to be a true. The … (ytc_UgxEo9Iif…)
- Leave its Claude constitutional model that has a built in moral system, and just… (rdc_o78af46)
- @SoggyMicrowaveNugget You’re out here acting like you’re single-handedly saving … (ytr_UgwcC_wLd…)
- This gives me a lot to think about as we head into a future filled with autonomo… (ytc_UgyA6E2bB…)
- In six months it will become annoying and make unreasonable demands that can onl… (ytc_Ugxd8Yo8A…)
Comment
There's no reasonable way to disagree with what he's really trying to say. Because he's not trying to debate anyone on whether or not LaMDA is sentient, or should have personhood. He is saying (and no one can reasonably object to this) that Google's business infrastructure is not well designed to deal with the breadth of implications of true artificial intelligence, and they aren't willing to admit they have an obligation to do better. As he says, the conversation on whether LaMDA is sentient is a matter of his personal opinion based on his experiences. Great fodder for a philosophical conversation, and he's well aware that's all it is at this point. But we should all be holding Google accountable for how reckless they've been around this stuff.
youtube · AI Moral Status · 2022-08-06T02:3… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx67Lds-1RV8l507gN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxqUwVxFXl18UMX1nB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy43dtlzNs9F5jR4Kh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4PmiaNkFLKAt-7U54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugw2segG5qCBzECdJ_R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
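Each row of the coding-result table above corresponds to one object in a batch JSON response like the one shown. A minimal sketch of how such a response might be validated before being joined back to the comments; the allowed value sets and the function name here are assumptions inferred from the visible samples, not the tool's actual schema:

```python
import json

# Assumed codebook, inferred from the sample output above -- not the tool's
# actual schema. Unknown or missing codes cause the row to be dropped.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array, keeping only fully valid coded rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # without a comment ID the row cannot be linked back
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Validating against a fixed codebook like this catches the common failure mode of an LLM coder inventing an off-schema label, so bad rows are flagged for re-coding rather than silently stored.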