Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "People put regulations if danger is real and not just theoretical. So regulation…" (ytc_UgzzOOs43…)
- "Human species has actually limits and counterbacks. Not considering bad attitude…" (ytc_UgzwiLQYI…)
- "AI will make our future generations use less of their brain power hence reducing…" (ytc_UgzPX04Qz…)
- "@badlybad1656 The problem isn't the AI itself. It's the humans I do not trust. O…" (ytr_UgzGHWnIr…)
- "Are we not going to talk about how sociopathic GPT is? Ai is just the moral val…" (ytc_Ugzl4RBKp…)
- "@alistairgrey5089 The point is the AI does not have to look things up. It is n…" (ytr_UgxVCVjn4…)
- "Imagine giving a two year old toddler all the knowledge ever accumulated in the …" (ytc_Ugwswt0uE…)
- "Little do that know we already have advanced AI and its a vtuber made by a turtl…" (ytc_UgxZtIBEX…)
Comment
@scr3am273 I'm inclined to agree. But talking about hypothetical sentience of this system is not really the most important or sensible ethical issue with AI. (LaMDA can't _suffer_ in such a sense that it would make sense to give it _rights_.)
It raises an interesting philosophical question, though, and that may be his intention.
There are real ethical issues around how AI should be used.
For example:
- Should it make decisions on whether to give prisoners parole? (We might not understand what led to the decision. It might be affected by bias that we are not aware of, for example if race is an input, or even if details correlated with race are inputs. One could argue that the same applies to human decision-makers, but at least then it's clear who is responsible for the decision, and they can be asked to justify it.)
- What about hypothetical lethal weapons systems that can choose their targets? https://autonomousweapons.org/
Video: "AI Moral Status" (YouTube, 2022-07-26T22:4…)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugw10prI4Aqgjs6DmIR4AaABAg.9dnEDsEr_-E9eR632j8WF0","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69drS2--BsWW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dru2TMbFTq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsuoTmjr9H","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsxuv4c2q3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzPYYLOn5cJ4vr8p814AaABAg.9dk1AxTviJY9drmUQQOjGp","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dph7a-Jc3d","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dvkAvdN2cK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dy23wLooXh","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgxiS0RJ7oiDcvTvabN4AaABAg.9di7YKrMH2i9diaGMg6dGc","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
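Each batch comes back as a JSON array with one object per comment: an `id` plus one label for each coding dimension. A minimal sketch of how such a response could be parsed and validated; the label sets below are only the values observed on this page, not necessarily the pipeline's full codebook, and `parse_coded_batch` is a hypothetical helper, not part of the actual pipeline:

```python
import json

# Labels observed in the coding results above; assumed here to be the
# allowed sets, which may understate the real codebook.
DIMENSIONS = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "resignation"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments),
    keeping only rows with a non-empty id and recognized labels."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id"):
            continue  # drop rows missing a comment ID
        if all(row.get(dim) in labels for dim, labels in DIMENSIONS.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"resignation"}]')
print(len(parse_coded_batch(raw)))  # → 1
```

Validating against a fixed label set catches the most common failure mode of JSON-mode coding: the model inventing an off-codebook label that would otherwise slip silently into the results table.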