Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
- The turing test obviously does not show whether or not something is sentient.
- The title Google Engineer being applied to Blake Lemoine is misleading as it implies he is qualified to understand the engineering behind LaMDA. He isn't and doesn't and wasn't supposed to.
- There is nothing close to 'AI' there at all. He's not making a valid point. He's talking total crap. There is no more AI in LaMDA than there is in predictive text or calculators or any programs that people have written or any video game AI. Nobody has talked about ethics for calculators or software or game AI. What about when you kill someone in a first person shooter game. Should we discuss the ethics of the treatment of the AI? Correct title should read: Unskilled idiot at Google gets freaked out by something he doesn't understand and makes a big fuss.
youtube AI Moral Status 2022-07-01T09:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_Ugy--nBrbwfUY0dBkOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugwzqpzm30HX4C3wyNN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyuIMACQCCGvmzg0pt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyC6viuep8ppeuEP094AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzpHU5Gxc7f97ZvyU54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]
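The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be parsed and a single comment's coding looked up by its id (the ids and field names are taken from the response above; the variable names are illustrative, not part of any actual pipeline):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment id.
# Two entries from the batch above are reproduced here for illustration.
raw = (
    '[{"id":"ytc_Ugy--nBrbwfUY0dBkOV4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_Ugwzqpzm30HX4C3wyNN4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"}]'
)

# Index the batch by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding that corresponds to the comment shown on this page.
coding = codings["ytc_Ugwzqpzm30HX4C3wyNN4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# -> user deontological none outrage
```

Indexing by id makes it straightforward to cross-check the flattened Coding Result table against the raw batch output, since each dimension value should match the corresponding JSON field.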