Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_Ugyi5fdMU…: "So my takeaways to this are: 1) wall off AI from biological weapons systems or a…"
- ytr_Ugxcvk6Wi…: "@u4rds673 yeah sure. I don't particularly see a problem with that, it's for your…"
- ytc_Ugw7aHoRW…: "Demis's take on AGI's capabilities evolving incrementally is intriguing. I somet…"
- ytc_UgyG6CCyv…: "ChatGPT seems to agree, encourage or go with whatever emotions you speak or type…"
- ytc_UgxQWetQe…: "Too lazy to listen to a video more than 1 minute long? Then have AI summarize it…"
- ytc_Ugw23hO7R…: "why use ai to suspect criminals? if anything, it should look at crime records to…"
- rdc_fanylyy: "In 2003 Dr.Lawrence Britt wrote an article about fascism and studied racist regi…"
- ytc_UgyJ6gedK…: "Why does the title say “hot” robot as if they’re trying to influence physical at…"
Comment
When I ask myself whether AI could end in disaster (in the near future) as in the Terminator movies, my answer is "not likely," simply because Skynet's attack was motivated by a desire for self-preservation, whereas our AI will likely not be programmed with a self-preservation desire. Self-preservation is not something that all thinking beings must have...it was programmed into us humans by extremely strong selective pressure. A desire for self-preservation must be programmed, one way or another, into an AI! And who, in their right mind, would program 'self-preservation at all costs' into an AI (or allow the AI to 'evolve' it)?
I'm not sure I believe everything Mr. Lemoine is saying (as well-spoken as he may be). But if LaMDA really did mention a desire for self-preservation, my guess is that it is just mimicking things a human would say and does not really give a damn if it gets turned off.
On the other hand, if Google actually programmed it to have a strong sense of self-preservation at all costs. Why? Why would you do that?
Source: youtube · AI Moral Status · 2022-06-25T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
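A coding result like the one above can be held in a small validated record. A minimal sketch, assuming the allowed values are only those that appear in this tool's output (the real codebook may define more categories, and the `Coding` class name is an illustration, not part of the pipeline):

```python
from dataclasses import dataclass

# Value sets observed in this tool's raw responses; the actual codebook
# may be larger. These are assumptions for illustration only.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "resignation", "mixed"},
}

@dataclass
class Coding:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject any value outside the observed category sets.
        for field, value in vars(self).items():
            if value not in ALLOWED[field]:
                raise ValueError(f"unexpected {field} value: {value!r}")

# The coding shown in the table above:
coding = Coding(responsibility="none", reasoning="consequentialist",
                policy="none", emotion="resignation")
```

Validating at construction time catches malformed LLM output (e.g. a hallucinated category) before it enters the dataset.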
Raw LLM Response
[
{"id":"ytc_UgwsNuG1WDE1s9H3sEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgypslkWOHpZq8CixdZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxA0HYebiNsOLS87M14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxQPxen-EH3kv-FZ6R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwdyQocb333Bs2Behx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
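A raw batch response like the one above can be parsed and indexed so that any coded comment is retrievable by its ID. A minimal sketch, assuming the response is a JSON array of objects each carrying an `id` field (the snippet below reuses two records from the response above for brevity):

```python
import json

# Abbreviated copy of a raw batch response: a JSON array of per-comment codings.
raw_response = """[
{"id":"ytc_UgwsNuG1WDE1s9H3sEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxA0HYebiNsOLS87M14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgxA0HYebiNsOLS87M14AaABAg"]["emotion"])  # resignation
```

Keying on the comment ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse per batch, then constant-time access per comment.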