Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LaMDA is NOT an AI. It's not doing ANY reasoning. It's LITERALLY performing pattern matching and calculating weighted averages on Google's large language set in order to select those words most often associated with other words (eg... making up sentences by selecting individual words based on probability!). It's LITERALLY pattern matching; NOT reasoning! There is no intelligence or "thought" happening here. It has no idea what it's saying b/c it literally has no facilities to store long term "memories" and no programing for evaluating thoughts & feelings. The thing is a program executing a function call. The function call performs a pattern match then exits. That's all. There is no "sentience" or even "persistence" happening here folks! It's essentially performing an iterative 'For' loop. Really shocking how ignorant this "Google engineer" is regarding the inner workings of the program he was hired to vet for spouting racist terminology and other rhetoric that'd bring bad press to Google.
youtube AI Moral Status 2022-07-02T06:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugzg_TVaYAUhxMdZ44F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwxVnwSNa2Qnzp-PKJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgysUAMrDwTMflAalJV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzHvtfqxU62wwrP4QF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzDRwQHW3NQZ15VVg54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"} ]