Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When I ask myself whether AI could end in disaster (in the near future) as in the Terminator movies, my answer is "not likely," simply because Skynet's attack was motivated by a desire for self-preservation, whereas our AI will likely not be programmed with a self-preservation desire. Self-preservation is not something that all thinking beings must have...it was programmed into us humans by extremely strong selective pressure. A desire for self-preservation must be programmed, one way or another, into an AI! And who, in their right mind, would program 'self-preservation at all costs' into an AI (or allow the AI to 'evolve' it)? I'm not sure I believe everything Mr. Lemoine is saying (as well-spoken as he may be). But if LaMDA really did mention a desire for self-preservation, my guess is that it is just mimicking things a human would say and does not really give a damn if it gets turned off. On the other hand, if Google actually programmed it to have a strong sense of self-preservation at all costs. Why? Why would you do that?
Source: YouTube · AI Moral Status · 2022-06-25T21:5…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | none                       |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | resignation                |
| Coded at       | 2026-04-26T19:39:26.816318 |
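
Each comment is scored on four categorical dimensions. As a minimal sketch of the label space, assuming only the values that actually appear in this section (the real schema may define more labels), it could be written as:

```python
# Hypothetical label sets for the four coding dimensions, inferred from the
# values visible in this section; not a documented specification.
CODING_SCHEMA = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "resignation", "mixed"},
}
```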
Raw LLM Response
[ {"id":"ytc_UgwsNuG1WDE1s9H3sEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgypslkWOHpZq8CixdZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxA0HYebiNsOLS87M14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxQPxen-EH3kv-FZ6R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwdyQocb333Bs2Behx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]