Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytr_UgwVvx6J9…: "Artificial humans will always be better then artificial intelligence, intelligen…"
- ytr_UgyKm3tW9…: "Eddie Venegas no that wouldn't work. Feelings are just reactions to sensory inpu…"
- ytc_UgwVVn5I-…: "18:01 providing isn't thriving. Robbing people of their right to hobbies is a cr…"
- ytc_UgyPWOq36…: "As an artist, I don't really hate ai \"art\" but I look at it as a way to make mys…"
- ytc_Ugw-W_vAd…: "Technically the consumer won't be able to buy anything if the robots take over s…"
- ytc_Ugzi1ppnw…: "Wouldn't it be sufficient to use AI to figure out how to exit the simulation BEF…"
- ytc_Ugyt3wPYQ…: "Most nurses are safe, at least the NICU. Plenty of evidence that humans don't th…"
- ytc_UgyhcYxgg…: "If they could segment the WWW by device category and geographically, maybe by co…"
Comment
People fearing that these "AIs" are going to become sentient or "smarter than humans" really do not understand the nature of "AI" at all. I speak from a Ph.D. in analytic philosophy, and I have had some genuine fun and entertainment (not to mention horror) testing ChatGPT. These systems are basically the old Eliza program on steroids, as they have access to a vast database and faster processing, but basically Eliza nevertheless. Their failures AS AI are glaringly apparent as soon as you start poking them with abstractions, logic, and paradoxes. And "making stuff up" is par for the course.
When you use them to, for example, generate computer code (which, btw, is one of the niches in which they are actually pretty good, comparatively speaking), they perpetually make up function calls that don't exist and never have existed. Basically, when asked to do "the hard work," they just generate "black box" functions that would contain the actual hard work, and then they gleefully refer to these functions as the solution to the programming question.
What is deeply troubling to me is the effect that "search" (and large-language models in general) are having on society. These engines, from search engines to these "AIs," are increasingly and now quite sweepingly considered to be truth-sources. Phrases like, "Just Google it," are ubiquitous, and human critical thinking and even minimal vetting are right out the window! And the problem is not just that these "results" are self-referential, incorrect, or skewed (which they are). It's what you DON'T see that is also considered "truth." When you "Just Google it" and DON'T see a particular result, or ChatGPT does NOT reference a particular thing, the immediate implication is that absence of evidence is evidence of absence! And so from medical questions to politics to every other imaginable subject, you hear people confidently asserting, "There's NO evidence that ____________, so I know that you are wrong."
Not only are such confident assertions flagrant appeal to ignorance fallacies, but, worse, people are increasingly handing off their "thinking" to "truth sources" that are really broken and even intentionally biased. Thus, national discussions are skewed toward UNtruth and NON-critical-thinking. Yet, the proponents of these opinions masquerading as truths are smugly confident in their critical thinking, research, and truth-awareness super-powers.
People are overwhelmed in information, so it is natural to overly-quickly compartmentalize the sea of data! But it is also the case that people have become absurdly intellectually lazy and intellectually dishonest. Confirmation bias produces a national divide in which both factions live in their own echo chambers of intellectual dishonesty, each side appealing to their own "truth sources," neither of which ARE actual truth sources.
So, our elections choices circle the drain, third-parties can get no traction, and the possibility of reform is as remote as the possibility of reversing the relentless trend toward Idiocracy.
These "AIs" are just the next step in the devolution toward Idiocracy. Their danger is NOT that they will become sentient and take over the world (they can't and won't). Their danger is that they dramatically contribute toward the tendency of increasingly stupid people to make increasingly stupid decisions.
Sadly, the content of this video is not an isolated phenomenon. This video concerned a legal case, but it is repeated wholesale across the entire spectrum of what counts as "information" in our Brave New World.
youtube · AI Responsibility · 2023-06-15T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzx4cEe4MPFqVL-RiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyS-jfdVbvwDwOdkT14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxoovRq0mkI9Nm4gKp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDjUr7K8TDJIWF6dx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIL2h1LmjkfsrEz4N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzf7HKTEFRtkVWbM_d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyq1PlxEOaJOvbEk094AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgykFmoD875y6fODkId4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwsLdKCtakLYwobPfx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwX5VJil90G7fAG5f54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
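The raw response above is a JSON array of coding records, one per comment, keyed by comment ID. A minimal sketch of "look up by comment ID": parse the model output and build an ID-indexed dictionary. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above; the function name and the truncated two-record sample string are illustrative assumptions, not part of the tool.

```python
import json

# Sample raw model output, abridged to two records from the response above.
raw_response = '''
[
  {"id": "ytc_Ugzx4cEe4MPFqVL-RiN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxoovRq0mkI9Nm4gKp4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
'''

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index its records by comment ID.

    Assumes the response is a JSON array of objects that each carry an
    "id" field, as in the raw output shown above.
    """
    records = json.loads(raw)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgxoovRq0mkI9Nm4gKp4AaABAg"]["emotion"])  # prints "indifference"
```

With the full ten-record response, the same lookup recovers the row shown in the "Coding Result" table (responsibility `none`, reasoning `mixed`, policy `none`, emotion `indifference`) for that comment ID.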