Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Currently the AI which we all are using is a type of "weak AI" or Narrow AI. Soo…" (ytc_UgyoBLrOO…)
- "I think AI will not replace artist tho, but the "People who use it" will replace…" (ytc_UgxbLCufL…)
- "Automation was hailed as a working man’s revolution and adopted by society becau…" (rdc_g68k8z0)
- ""As Rusty Hat Stud, I've revolutionized my songwriting process using AI tools. H…" (ytc_UgwgUuMTs…)
- "Good summary. Though I wouldn't go so far as to call all things AI created theft…" (ytc_UgzY_CW7W…)
- "So AI art isn’t using those photos as references they’re mashing together differ…" (ytr_UgyYKyivU…)
- "Re: AI replacing doctors, I think it could or should replace my endocrinologist …" (ytc_Ugwm9mP3B…)
- "AI is a tread? That’s not AI, there is plan behind all this , it was planned by …" (ytc_UgxdZryPe…)
Comment
@Anonymous-sb9uh Hello. I'm glad you've chosen to comment on that part, because it gives me a chance to explain further with another metaphor, different from the Chinese Room argument, that may be easier to understand.
Suppose you have a young engineer who knows next to nothing about, say, aerodynamics, but who knows enough of the jargon, and how the terms fit together, such as "low Reynolds number" and "incompressible flow" and "turbulent boundary layer". He has no real idea what these actually mean, but he can appear to have knowledge of aerodynamics by throwing the terms around in a conversation or huddle with engineers who do know what they mean. This imposter might, for a time, be taken into their confidence, because he knows to assert the jargon at the right moments: he has learned not the physical meanings in terms of airflow phenomena, but rather the probabilistic associations of the terms, which one tends to follow another in a spoken narrative. People interacting with him will mistakenly infer that he has some sort of internal model of the phenomena that gives him his ability to say seemingly plausible things, when in fact he has learned the most superficial associations of all, namely when one term should follow another in conversation. Let me add that this is based on a true story of just such a huckster, many years ago. He did become a very good engineer years later.
For what it's worth, this is not a new problem in AI. Several times in history, people thought AIs had gained the ability to, say, detect a tank hidden in the foliage, when in fact the network had learned to discriminate between light and dark pictures! Similarly for NetTalk, which I've mentioned already; and going back to 1966 or so there was Eliza, the simulated psychotherapist: Joseph Weizenbaum's secretary asked him to leave the room because she was telling it such personal things, of which it had not the least understanding. The answer to how to construct neural networks with the potential for human-level understanding may be lurking in the literature somewhere, but it has yet to be found and properly applied in a test case or research thesis, much less distilled into a set of general design principles that would let AI engineers repeatably construct reliable, trustworthy neural networks that are "correct by construction".
The main point is that rigorous methods do not yet seem to exist for proving that a neural network has learned something to the level of understanding needed to approximate human knowledge representation; all we currently have are pseudo-Turing-Test evaluation methods and our own intuition, which is what Geoffrey Hinton is currently peddling. He is not the philosopher he claims to be on the fundamental questions of what intelligence and understanding are, or are not.
youtube · AI Governance · 2023-06-04T16:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgzKnGmVFBIEBW0RLhR4AaABAg.9pxZ-BlHMdI9qYKjxI8OWa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzoQ5DriO_YbzxCSTl4AaABAg.9pwD8b_YRS59q-6fUgIC0B","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwLjYyF-moL1wBYEOh4AaABAg.9pw4E0qJtgM9q_ML-yjrHh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwLjYyF-moL1wBYEOh4AaABAg.9pw4E0qJtgM9q_wWOWU1ja","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwLjYyF-moL1wBYEOh4AaABAg.9pw4E0qJtgM9vPFrCjWObp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgysjfRUSxk4TynTTw94AaABAg.9pp_4BE4N0J9pr1n_ju9va","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyGB9d8GNe2WCb82hJ4AaABAg.9phdP7YLhH_9phdzdAXPiH","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgztP0Hl6rAb39QQxgd4AaABAg.9ph7WF6LKQh9phQE8arHMW","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyomEtT0_FdPn7sXBR4AaABAg.9pgyg5W6IGF9phQOzmcsIh","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz9P_ZlLPQqA7mmbyd4AaABAg.9pfPsV0maKF9qSls6TFa77","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
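The raw LLM response above is a JSON array with one object per comment, carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated is below; the allowed value sets are assumptions inferred only from the codes visible on this page (the real codebook may define more categories), and the comment IDs in the sample input are made up for illustration.

```python
import json

# Assumed codebook, inferred from the values visible in this page's
# raw responses; the actual codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"none", "distributed", "government", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "liability", "regulate", "none"},
    "emotion": {"indifference", "outrage", "fear", "resignation"},
}

def parse_codings(text: str) -> dict:
    """Parse one raw LLM response and return {comment_id: coding dict},
    rejecting any value outside the assumed codebook."""
    out = {}
    for row in json.loads(text):
        cid = row["id"]
        coding = {}
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            coding[dim] = value
        out[cid] = coding
    return out

# Hypothetical sample input in the same shape as the raw response above.
raw = json.dumps([
    {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
     "policy": "unclear", "emotion": "indifference"},
    {"id": "ytr_example2", "responsibility": "government",
     "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
])

codings = parse_codings(raw)
print(codings["ytr_example2"]["policy"])  # → regulate
```

Validating against a closed value set at parse time is what lets a coding pipeline fail loudly when the model invents an off-codebook label, rather than silently storing it.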