Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_Ugz8o2fX5…`: "It's like the psychology of an incel, except instead of believing that you're in…"
- `ytc_UgxLd5LNp…`: "If one can predict outcomes in advance then it is not AI. On this note, algorith…"
- `ytc_Ugw_2JDVS…`: "Thing is, there is zero way to check how good the AI does, apart from some stand…"
- `ytc_UgzMW866n…`: "All useful content. Props to Google for watermarking their AI generated content.…"
- `ytc_Ugx5Kllb6…`: "Robot was not allowed to take his 15min break this is how i get too…"
- `ytc_UgzdfR_QU…`: "What absolute BULLSH*T. What \"love\" has this thieving criminal liar ever shown f…"
- `ytc_UgzrNPEs1…`: "When it comes to the topic/conversation of *Artificial Intelligence (A.I.)*, th…"
- `ytc_Ugxcb7Xqc…`: "You are awesome! I have a better understanding how to use ChatGPT and thank you …"
Comment
Lemoine states "We should think about the feelings of the AI" even though we know perfectly well that LaMDA or any other Language Model based on similar technology simply cannot have any sort of feelings. This person has put his belief system, that includes the idea that a conceptually simple program can have feelings, above any sort of scientific knowledge. LaMDA is designed to regurgitate responses based on a knowledge corpus that is arbitrarily chosen. Its responses reflect that. If you train an LM on 4chan content you get politically incorrect (to put it mildly) responses. This has actually been done. Simply because the system responds that it has feelings proves nothing. Also, the Turing Test was devised in 1950. It means nothing to pass it. I remember quite clearly when we used to state unambiguously that a system that would beat the world chess champion would clearly indicate that it was "intelligent". That was not controversial in the 1980's. Today Stockfish (a chess playing bot) has an ELO rating of over 3,500 while Magnus Carlsen is a bit over 2,800 and nobody would ever claim that Stockfish thinks in any meaningful ways although it plays incredibly great chess. The reality is that this guy doesn't understand what he is talking about. He is just another variety of "flat earther". He denies the reality that lies in front of him either because he is ill equipped to handle the task or because he is profiting from his stance. Most probably both.
Source: youtube | AI Moral Status | 2022-07-03T07:0… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxLsq2xC11GEWCkJVp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyyDKDQIwxtSAOnpc54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwOV7_AfnrH-WQz9_N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyeqVCUwqAYCjXAS1t4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwI7Zqk4bBdTo3-bdh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
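A minimal sketch of how a raw response in this format could be parsed and sanity-checked before use. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample output above; the full sets of allowed values are an assumption inferred from the examples, not a confirmed codebook, and the comment IDs below are hypothetical placeholders.

```python
import json

# Hypothetical raw LLM response in the same shape as the output above.
# IDs are placeholders, not real comment IDs.
RAW = """[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]"""

# Assumed value sets per dimension, inferred only from the sample rows shown.
ALLOWED = {
    "responsibility": {"none", "company", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "liability", "none"},
    "emotion": {"mixed", "fear", "indifference", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw coding response, keeping only rows whose values
    fall within the assumed per-dimension vocabularies."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

codings = parse_codings(RAW)
print(len(codings))  # both sample rows pass validation
```

Dropping (rather than repairing) out-of-vocabulary rows keeps the check simple; a real pipeline might instead log them for re-coding.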