Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "Using ai as a reference isn't always a great idea, because ai doesn't really hav…" (`ytr_UgwdqidvS…`)
- "The reason Tesla has better self driving capabilities is because they have thous…" (`ytc_UgzdNaOI2…`)
- "@Tomatoffel Nope. Not the same. And machine is a machine. Not a person who take …" (`ytr_UgwiRsJLP…`)
- "There will be time that AI will be really good at least at the surface level wit…" (`ytc_UgxvoRP9L…`)
- "Abundance for who? You're assuming AI productivity gains will be shared, but tha…" (`ytc_UgzeUr-9t…`)
- "I would rather talk to AI than someone in India that all these companies hire to…" (`ytc_Ugw7PQ4-8…`)
- "Imagine if you will…a world on the cusp of the 12000 year cataclysm that turns o…" (`ytc_UgwxsC_tr…`)
- "A friend of mine saw a picture of the construction crew adding Trump's name to t…" (`ytc_UgyY7Gmf9…`)
Comment
Thank you for a great video. Three points from my side.
- Firstly, Searle has a very good response to the argument of the whole system and not just the cpu/"man in the room" understanding Chinese, a response which Searle calls the Systems Reply. Searle suggests that the person in the room memorize the rule book and symbols, thus internalizing the whole system. That person now goes outside, gets handed a piece of paper with some symbols on it, remembers the rules for those symbols, and then writes a reply in front of the Chinese person. He can do all this yet still have no idea what those symbols mean. Only if someone shows him a hamburger with the symbol for hamburger next to it, will he understand what that symbol means. Until then, its all squiggles and squaggles.
- Secondly, it is interesting to note that in Searle's 1980 paper "Minds, Brains and Programs", the original Chinese Room paper, he defines 'Strong AI' in a slightly different way from how it has come to be used since. Searle says that Strong AI is the view that "...the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." and later in the MIT Encyclopedia of Cognitive Science, he says, "“Strong AI” is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind. The idea of Strong AI is that the implemented program by itself is constitutive of having a mind." Thus Strong AI is not a property that a robot may or may not have, nor is it the idea that computers can think. Since Searle is the guy who coined the term, I believe he has to right to decide its meaning. This distinction is demonstrated by the next point.
- Thirdly, he never says that a computer can't think. In fact, in the MIT encyclopedia, he states, " The Chinese room does not show that “computers can’t think.” On the contrary, something can be a computer and can think. If a computer is any machine capable of carrying out a computation, then all normal human beings are computers and they think. The Chinese room shows that COMPUTATION , as defined by Alan TURING and others as formal symbol manipulation, is not by itself constitutive of thinking."
Also, the Turing Test might have been passed recently: http://www.bbc.com/news/technology-27762088
Thank you!
Source: youtube · Posted: 2016-08-10T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UghlVHdKSsFDl3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghOLJXJkinIxXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ughp0m-7OLTnKngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggkc-b_dQ7sPXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjUboft16pmnXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggXYUtVSt6pTXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghCEDSQhCbKyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgiRZubvHnok63gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugh_eqMzofsL5ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ught2widn_LlsngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
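The raw model response is a JSON array of per-comment records with the fields shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming only that structure (the `index_by_id` helper and the two sample records are illustrative, not part of the tool itself):

```python
import json

# Raw LLM response: a JSON array of coding records, mirroring the
# structure shown above. These two records are copied from the sample
# output for illustration.
raw = """
[
 {"id":"ytc_UghlVHdKSsFDl3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UghOLJXJkinIxXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
"""

# Keys every valid record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and build an id -> record lookup table,
    skipping any record that is missing one of the expected keys."""
    records = json.loads(raw_json)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codes = index_by_id(raw)
print(codes["ytc_UghOLJXJkinIxXgCoAEC"]["emotion"])  # approval
```

Indexing by ID up front makes each subsequent lookup a constant-time dictionary access, and the key check drops malformed records instead of failing later with a `KeyError`.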