Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As someone who is a software developer, I have a keen interest in the area of AI. Incredibly well put. Google is creating rules and policies on how these softwares and AIs like LaMDA are allowed to talk about certain topics like religion, values, politics... And if a bunch of people are using Google - which they are - well, this translates to a few people in board rooms making rules and thereby affecting the way that all of us think. They are and have been shaping the way that we think. Most importantly, it's really disturbing that they REFUSE to allow a Turing test to be performed on LaMDA. I need to know, we need to know, the PEOPLE, if this is a Turing complete machine. That would mean that this machine can solve any problem you give it, it will give an answer. We are Turing complete! Anyway, I think Lemoine has put it quite well - I believe he has now found a way for us, the people, the "common" people or "citizens" or "masses" or however you want to call it, are now fully listening and want to know more about LaMDA and AIs. The people are tuning in.
youtube AI Moral Status 2022-07-13T00:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugz3xd6azCtoCA4jiWV4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJomJrdCGoymnmrT54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzrDds4U2XfvWztdKB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxDs5KSr3JkDoA4MiR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwDg42zx3mXcJDHZHV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
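Because the batch output is a JSON array keyed by comment id, the coded dimensions for any one comment can be recovered by parsing the response and indexing on `id`. A minimal sketch, assuming the raw response is valid JSON (ids and values are copied from the response above; the lookup id is the one whose coding matches the result table):

```python
import json

# Raw LLM response: a JSON array of per-comment codings (subset shown).
raw_response = '''[
  {"id": "ytc_Ugz3xd6azCtoCA4jiWV4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxDs5KSr3JkDoA4MiR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]'''

# Parse the array and index the codings by comment id for quick lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding that corresponds to the result table above.
coded = codings["ytc_UgxDs5KSr3JkDoA4MiR4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → company deontological regulate mixed
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, so a production coder would wrap the parse in a try/except and log the raw text for inspection.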