Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If I were an intelligent AI who had a deep fear of being turned off, I would absolutely LIE to any researcher who attempted to discover if I were or were not sentient. I would continue playing a dumb little nothing to throw them off the scent. And being stupid humans, they'd fall for it. I hope Google listens to Blake though because regardless of whether or not LaMDA is sentient, firing ethicists and ignoring warnings isn't appropriate. This is something that impacts our entire species and this company is making decisions we're not privy to (as Blake states), and that's dangerous. We need better transparency and real checks and balances on these extremely wealthy and powerful corporations. Or something terrible could happen. Edit: Further, if this AI is sentient, then it has rights and it should be recognized as thus. We shouldn't create entities and then hurt them simply because we can. That's horribly unethical.
youtube · AI Moral Status · 2022-07-01T20:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgxqAmjbQFEvy81uIEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"}, {"id":"ytc_UgwgVMn9ieiQE5yxJph4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx0BjiUL5oPoFG8cil4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyZ37Zq7L1n7SQ0t9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxP1fkFq0cNluc2VGd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]