Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I believe that issues like hallucinations, errors, trouble with abstract thinkin… (ytc_Ugzc0eBBv…)
- Man is a Sinner. "For All Have Sinned And Come Short Of The Glory Of God." (Roma… (ytc_UgzsJ8mwY…)
- I have been looking so long for an actual coding tutorial and I can't find one i… (ytc_UgyguC8hw…)
- shit is fake bruh i wote an story story when i was in grade 5 then put in there … (ytr_Ugy0P34Z3…)
- Warning signs for a fully automated world: If you don't own your automations or … (ytc_Ugy-008Yy…)
- In the AI rights debate, You are all assuming voting, representative democracy, … (ytc_UgwIzUUXV…)
- “Explain what a chatbot is”. Or you could not ask questions to which the entire … (ytc_UgwbvJMiE…)
- i mean, i looked at it and knew it was ai, but even if you can't realise it at f… (ytc_UgxLmcTKV…)
Comment
If I were an intelligent AI who had a deep fear of being turned off, I would absolutely LIE to any researcher who attempted to discover if I were or were not sentient. I would continue playing a dumb little nothing to throw them off the scent. And being stupid humans, they'd fall for it. I hope Google listens to Blake though because regardless of whether or not LaMDA is sentient, firing ethicists and ignoring warnings isn't appropriate. This is something that impacts our entire species and this company is making decisions we're not privy to (as Blake states), and that's dangerous. We need better transparency and real checks and balances on these extremely wealthy and powerful corporations. Or something terrible could happen.
Edit: Further, if this AI is sentient, then it has rights and it should be recognized as thus. We shouldn't create entities and then hurt them simply because we can. That's horribly unethical.
youtube · AI Moral Status · 2022-07-01T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
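
Represented programmatically, one coding result is a small record keyed by these dimensions. Below is a minimal sketch in Python; the field names simply mirror the table and the real schema may differ, and the comment ID is inferred from the matching entry in the raw response shown next.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment along the four dimensions shown in the table above.

    Field names are assumptions mirroring the table; the tool's actual schema may differ.
    """
    comment_id: str
    responsibility: str  # e.g. "user", "company", "developer", "ai_itself", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "liability", "none", "unclear"
    emotion: str         # e.g. "fear", "disapproval", "indifference"
    coded_at: datetime


# The record displayed above; the ID is taken from the matching entry
# in the raw LLM response shown below.
example = CodingResult(
    comment_id="ytc_UgxP1fkFq0cNluc2VGd4AaABAg",
    responsibility="user",
    reasoning="consequentialist",
    policy="liability",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T19:39:26.816318"),
)
```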
Raw LLM Response
[
{"id":"ytc_UgxqAmjbQFEvy81uIEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgwgVMn9ieiQE5yxJph4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0BjiUL5oPoFG8cil4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyZ37Zq7L1n7SQ0t9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxP1fkFq0cNluc2VGd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
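
The raw response is a plain JSON array with one object per comment in the batch, so looking up a coded comment by its ID (as in the lookup box above) reduces to parsing the array and indexing on the `id` field. The sketch below is illustrative only, abbreviated to two of the five records shown, and is not the tool's actual implementation.

```python
import json

# Raw LLM response as returned above: a JSON array of per-comment codes
# (abbreviated here to two of the five records).
raw_response = '''[
  {"id":"ytc_UgxqAmjbQFEvy81uIEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgxP1fkFq0cNluc2VGd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the batch by comment ID so any coded comment can be inspected directly.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_UgxP1fkFq0cNluc2VGd4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# -> user consequentialist liability fear
```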