Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@scr3am273 I'm inclined to agree. But talking about the hypothetical sentience of this system is not really the most important or sensible ethical issue with AI. (LaMDA can't _suffer_ in such a sense that it would make sense to give it _rights_.) It raises an interesting philosophical question, though, and that may be his intention. There are real ethical issues around how AI should be used. For example:

- Should it make decisions on whether to give prisoners parole? (We might not understand what led to the decision. It might be affected by bias that we are not aware of, for example if race is an input, or even if details correlated with race are inputs. One could argue that the same applies to human decision-makers, but at least then it's clear who is responsible for the decision, and they can be asked to justify it.)
- What about hypothetical lethal weapons systems that can choose their targets? https://autonomousweapons.org/
Source: YouTube · AI Moral Status · 2022-07-26T22:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugw10prI4Aqgjs6DmIR4AaABAg.9dnEDsEr_-E9eR632j8WF0","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69drS2--BsWW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dru2TMbFTq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsuoTmjr9H","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsxuv4c2q3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzPYYLOn5cJ4vr8p814AaABAg.9dk1AxTviJY9drmUQQOjGp","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dph7a-Jc3d","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dvkAvdN2cK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dy23wLooXh","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_UgxiS0RJ7oiDcvTvabN4AaABAg.9di7YKrMH2i9diaGMg6dGc","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
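A response like the one above has to be parsed and validated before the per-comment labels can be trusted. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this response (the real codebook may permit more), and the function name is our own invention, not part of the pipeline.

```python
import json

# Allowed labels per dimension, inferred from the values observed in the
# raw response above -- the actual codebook may differ (assumption).
ALLOWED = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array, keeping only rows whose labels
    fall inside the allowed sets; malformed rows are dropped."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if isinstance(row, dict)
        and "id" in row
        and all(row.get(dim) in ok for dim, ok in ALLOWED.items())
    ]

# Illustrative usage with a made-up id:
sample = ('[{"id":"ytr_example","responsibility":"company",'
          '"reasoning":"consequentialist","policy":"regulate",'
          '"emotion":"resignation"},'
          '{"id":"ytr_bad","responsibility":"robot",'
          '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
valid = parse_coding_response(sample)
print(len(valid))            # only the well-formed row survives
print(valid[0]["emotion"])
```

Dropping invalid rows (rather than raising) matches how such a tool would surface partially usable model output; a stricter pipeline might log or re-prompt on rejected rows instead.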