Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
One of the things that I wish more people understood about the current batch of LLMs is that they're literally made to _seem_ intelligent.
When you're training a machine-learning model, you need to come up with a measure of fitness - a numeric score that allows the model to direct its convolutions. It tries a bunch of random stuff, sees which randomizations got a higher score, and then uses those as the basis for its next convolution.
Chat GPT and the other LLMs' measure of fitness is essentially "does this response seem like a human wrote it."
Essentially, the company programmed the thing to seem intelligent, and now when it tells us that it *is* intelligent, people are believing it. But it's not a machine making decisions or weighing outcomes, it's a blender that recombines its training data in randomized ways to *look* like it's intelligent.
I don't like it when people use terms like "learn," "think," "decide," "understand," "knowledge," or "hallucinate" when referring to these models, because that accepts and furthers the framing of them as decision-making agents. It anthropomorphizes an opaque algorithm that the silicon valley hypemongers *really* want us to anthropomorphize. ChatGPT doesn't think or decide or hallucinate, it recombines. It's incapable of distinguishing fiction from nonfiction because it's not thinking or applying logic. It can't tell the truth about very obvious things because its training set contains plenty of falsehoods, and they are weighted the same as the truths.
The thing that scares me about AI isn't what the AI is doing or what it's capable of. What I'm scared of is what *people* are doing with AI. One of my friends was told to their face *by a doctor* that they should ask ChatGPT if they have any medical questions, and that it knows better than any doctor. People in power are trying to replace jobs that require high levels of expertise and veracity with this goddamn facsimile. I don't think that's going to last too long since LLMs blatantly cannot fill those roles. But what scares me the most is that a lot of students are using them to successfully cheat on their homework. I saw this happen when I was working as a lab assistant in a CS course - in the early classes students would feed the lab assignments to CGPT and turn in whatever it wrote, and it would work. CGPT is pretty effective at very basic programming tasks. And then those students would move on to later classes and fail out - because they skipped the part of the process where you actually learn anything. Once the homework was complex enough that CGPT couldn't solve it, they were helpless. It is seriously damaging the current student body's ability to think critically. That's bad enough in computer science, but it's also happening in civics courses, law courses, history courses. The future electorate and the people who will be our future government are passing classes without knowing any of the things they were meant to learn. The thought of an undereducated electorate that cannot think critically fills me with dread.
We do not need to be afraid that an AGI is coming. For an LLM to be much "smarter" than the current iteration of CGPT, it would need a training set that is larger than the set of every word ever written. More importantly, that fear feeds into the hype cycle, and the cycle is what's actually doing so much of the damage. We need to be afraid that AGI is *not* coming, and everyone in power is acting like it is.
youtube · AI Moral Status · 2025-11-03T02:0… · ♥ 296
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
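To make the table's structure explicit, here is a minimal sketch of how one coding result could be represented and sanity-checked in Python. It is illustrative only: the class and field names are assumptions, and the allowed label sets are simply the values observed on this page, not the project's full codebook.

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed on this page; the actual codebook may define more values.
RESPONSIBILITY = {"developer", "company", "distributed", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "mixed", "approval", "resignation", "indifference", "outrage"}

@dataclass
class Coding:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise ValueError if any dimension carries an unexpected label."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")

# The coding from the table above; the comment ID is taken from the entry in the
# raw response below whose labels match this table.
example = Coding(
    comment_id="ytc_UgyvQRoflPZn7t69o_x4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
example.validate()
```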
Raw LLM Response
```json
[
{"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2MC5eEVARGuy3CCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyLTRKwkIst_sth2h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwYfnt7J6wTRHSiBcN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyKG33hoks_foVgtWF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgycTSlwAwauHeOXfXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzPCeLMayKt3iNFdax4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyvQRoflPZn7t69o_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfID0Gt7h0dycer3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyFYDQ_c_-eg1-JyO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
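Because the page exposes the exact model output, the coding for any comment in a batch can be recovered by parsing that JSON and indexing it by comment ID. Below is a minimal sketch of that lookup in Python; it is illustrative only (the function name and error handling are assumptions, and the response is truncated to two entries for brevity), not part of the project's codebase.

```python
import json

# Raw batch response as shown above, truncated here to two entries.
raw_response = '''[
  {"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyvQRoflPZn7t69o_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse one raw model response and key each coding by its comment ID."""
    try:
        codings = json.loads(raw)
    except json.JSONDecodeError as err:
        # Models do not always return well-formed JSON; surface the failure clearly.
        raise ValueError(f"response is not valid JSON: {err}") from err
    return {item["id"]: item for item in codings}

lookup = index_by_comment_id(raw_response)
print(lookup["ytc_UgyvQRoflPZn7t69o_x4AaABAg"]["emotion"])  # prints "indifference"
```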