Raw LLM Responses

Inspect the exact model output that produced the coding for any individual comment.

Comment
Part of me thinks whether LaMDA is sentient or not is irrelevant because it's just an insanely complex mathematical algorithm derived from analyzing input data. Then I remember the old saying: "The question is not whether machines can think, but whether men do." We each assume other humans are sentient despite never knowing for sure. Why? Because they look similar to us? Because they speak a language we understand? Because we can sympathize with their reactions to external stimuli? Those just prove other humans aren't inert lumps of matter, not that they're sentient. If we're willing to assume other humans are sentient, why not assume sufficiently-complex computers are sentient too?
Source: YouTube · AI Moral Status · 2022-06-29T19:4… · 2 likes
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugyt2BJeK4CCp7N1UyN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxQ6wK0EpW2uidBfVd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz0eKPXvcwTjE7vld94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZbHLP6VT7rfFVHP54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwEy3MpogwCDHRnrbl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"} ]