Raw LLM Responses

Inspect the exact model output for any coded comment. You can look a comment up by its ID, or pick one of the random samples below.

Random samples:
- "If self driving car can't detect movement on the road in front of it, at the spe…" (ytc_Ugx80gfic…)
- "i don't really like the discussion about cars replacing horses. not because it's…" (ytc_UgweOEIWf…)
- "This is so bizarre. Why? How? Who would want to talk to a robot! It's bad eno…" (ytc_UgzAjyOza…)
- ""The AI builds the robots and the robots build the factory that then builds more…" (ytc_UgyvhuTWi…)
- "The Giant elephant in the room. American workers buy groceries, dine out at res…" (ytc_Ugzb6vwEB…)
- "Ok but saying please and thank you DOES still give me points in the AI apocalyps…" (ytc_UgwCZfk9H…)
- "> "TTIP is perhaps more relevant as setting a precedent vis-a-vis third count…" (rdc_d0frspi)
- "There is one thing about AI is that it can't keep consistency when switching ang…" (rdc_muc71ap)
Comment (youtube · "AI Moral Status" · 2022-07-02T06:3…):

> LaMDA is NOT an AI. It's not doing ANY reasoning. It's LITERALLY performing pattern matching and calculating weighted averages on Google's large language set in order to select those words most often associated with other words (eg... making up sentences by selecting individual words based on probability!). It's LITERALLY pattern matching; NOT reasoning! There is no intelligence or "thought" happening here. It has no idea what it's saying b/c it literally has no facilities to store long term "memories" and no programing for evaluating thoughts & feelings. The thing is a program executing a function call. The function call performs a pattern match then exits. That's all. There is no "sentience" or even "persistence" happening here folks! It's essentially performing an iterative 'For' loop. Really shocking how ignorant this "Google engineer" is regarding the inner workings of the program he was hired to vet for spouting racist terminology and other rhetoric that'd bring bad press to Google.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzg_TVaYAUhxMdZ44F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwxVnwSNa2Qnzp-PKJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgysUAMrDwTMflAalJV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzHvtfqxU62wwrP4QF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzDRwQHW3NQZ15VVg54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
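Because the model returns one JSON array per batch, each record needs to be checked against the coding scheme before its values are stored. Below is a minimal sketch of such a validation step, assuming a Python pipeline; the allowed vocabularies in `ALLOWED` are inferred from the examples on this page and are assumptions, not the project's actual codebook.

```python
import json

# Assumed dimension vocabularies, inferred from the sample output above.
# The real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "government", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with missing IDs
    or out-of-vocabulary dimension values."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError("record is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Usage: a well-formed single-record batch passes through unchanged.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
batch = validate_batch(raw)
print(len(batch))  # 1
```

Rejecting a whole batch on the first bad record keeps the stored codings consistent with the raw response shown above; a more forgiving pipeline could instead flag bad records for manual re-coding.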