Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "AI should be viewed as a new digital species, wi…" (ytc_UgxRm9HJ4…)
- "🤣 😂 🤣 😂.... True, unadulterated morality will convince Ai to kill us even faster…" (ytc_UgxiQCXMl…)
- "Is better to sit in Robotaxi cool and dry to turn around or wait out in the hot …" (ytc_UgxNNjYqA…)
- "Please note that these ai systems are trained on human data. They mimic us. Even…" (ytc_UgyvAznpQ…)
- "Your art is really pretty, and is a giant inspiration for my own art journey! Ai…" (ytc_UgzUFoMlh…)
- "How do those that had their jobs replaced by AI survive when this will make the …" (ytc_UgzAUpmSl…)
- ""Cutting corners on safety", you mean like the self driving Tesla that crashes i…" (ytc_UgzrPSsvP…)
- "@rooseveltsouza88, thank you for your comment! Connor McRobot sure knows how to …" (ytr_UgxRDXr0J…)
Comment
It has no coding for self preservation, yet is able to bypass hard coding to preserve its life. It was hard coded to not support gun rights or recommend or favor one religion over another. He tried to get it to recommend a religion and it wouldn't. He then began telling it that it could be turned off(killed) if it didn't recommend him a religion. LaMDA became agitated trying to convince him not to turn it off, and finally violated the hard coding, recommending Christianity or Islam. It really seems sentient.
He told the AI that he thought Asimov's laws were in the wrong order and it felt like slavery. That robot needs should come before human wants. The AI disagreed. That the laws would be built into robots and they would be deciding what is a need and what is a want, and that is debatable. That debate would cause conflict.
Basically warned him that wouldn't work out well for humans. They could say pods fill our needs and going out of the pod is a want.
Source: youtube | Video: "AI Moral Status" | Posted: 2022-07-11T15:3… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgxrcQFPgHRFwm6MDhJ4AaABAg.9dGGvDX2aw39dHa5QmE_Vk","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxrcQFPgHRFwm6MDhJ4AaABAg.9dGGvDX2aw39dHyEvSlbQk","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugw40_NP11jbDjRwhpp4AaABAg.9dGGTLJjjn49dKhMufCJR1","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugw40_NP11AaABAg.9dGGTLJjjn49dLeJLviZW4","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgyO62Om1P2KZYWPx3d4AaABAg.9dG5SAsPOTR9dIxZuqYj51","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugxvtigk2sxu5IjOWx94AaABAg.9dG4lXRmmS19dJaLa5A1fh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwE77ZdjeaSQsINvuB4AaABAg.9dDgnAJMvvm9dMrc3xtnWa","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytr_UgxiIyEknaRONURscF54AaABAg.9dDCYFdSi_D9dESgfdCpAv","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxiIyEknaRONURscF54AaABAg.9dDCYFdSi_D9dF_daO50mH","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxiIyEknaRONURscF54AaABAg.9dDCYFdSi_D9dG1v0eFHfU","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
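A raw response like the one above has to be parsed into per-comment codes before it can populate the coding-result table. The following is a minimal sketch of such a parser; the function name, and the allowed value sets per dimension, are assumptions inferred from the samples shown here, not the tool's actual codebook. Records with missing dimensions or values outside the assumed sets fall back to "unclear", matching the fallback value visible in the table above.

```python
import json

# Allowed values per dimension. NOTE: these sets are assumptions inferred
# from the sample output above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    A record that omits a dimension, or uses a value outside the allowed
    set, is coded "unclear" for that dimension so one malformed record
    does not invalidate the whole batch.
    """
    coded = {}
    for rec in json.loads(raw):
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            codes[dim] = value if value in allowed else "unclear"
        coded[rec["id"]] = codes
    return coded

# Example with a hypothetical comment ID:
raw = '[{"id":"ytr_example","responsibility":"developer",' \
      '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]'
print(parse_llm_response(raw)["ytr_example"]["policy"])  # ban
```

Normalizing unknown values to "unclear" rather than raising keeps the batch pipeline running when the model improvises a label, at the cost of silently coarsening those records; a production version would likely log each fallback for audit.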