Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples

- ytc_UgzE9UI2o…: "I felt sorry for chatGPT having been pushed so much and admitting it was lying l…"
- ytc_Ugz4fWZrU…: "Japan: let's use AI to help call center workers / Rest of the world: let's use AI…"
- ytc_Ugy3Gg9ov…: "When ChatGPT first came out, I was already impressed with it's code generating a…"
- ytc_UgzZ7jiKp…: "The! The? There are so many problems with self driving and you use the word "The…"
- ytc_UgyPVCY05…: "how ironic. then tax ai and robots, they will have a job in the future. bloody i…"
- ytr_UgxjiVJCH…: "@CC23-14plusthe point of art is doing something yourself, how is it art if the A…"
- ytc_UgwcewTbD…: "I haven't watched the video yet and after I watched this I'll come back and repl…"
- ytc_UgwkXkeuW…: "What I understand is that AI can manipulate story telling; medical examinations…"
Comment
I agree with you in many ways, but my take is that the opposite is happening in terms of reactions... I see a huge number of people downplaying and dismissing what chatgpt can do because of the incorrect (BS, more precisely) responses it gives. They are reacting to its output as if it were supposed to be correct, as if there was any expectation that it was looking up information to give to you.
It isn't a search engine; it's a language generation tool. All it is trying to do is predict what language would come next in a given context. And it isn't just parroting or cribbing existing content; it's generating new language, based on the sum total of what it's been exposed to, which is essentially the same thing that humans do when they are "creative". It's basically a much better version of the suggested words above the keyboard on an iPhone.
The fact that it can do as much as it can just as a byproduct of being trained on so much written material is remarkable. As far as I understand it, it hasn't been explicitly trained to solve physics problems, write computer code, or translate between English and Chinese, and yet it can do all of those things shockingly well (but also imperfectly).
It is already remarkably useful if you don't expect it to do things well that it wasn't designed to do. Once this kind of language model gets combined with actual search capability, information databases, and explicit instruction on actual skills, it is going to be much, much more useful, even if it doesn't have its own intentionality.
Most of what you say about garbage in garbage out is correct. But it's even more true of humans, and I see a lot more potential for improving algorithms than improving people unfortunately.
Edit: fixed auto”correct” errors.
Source: reddit · Topic: AI Governance · Posted: 2023-02-13 01:25:40 UTC (1676251540) · ♥ 60
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j8bc1ta","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_j8awh01","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_j8b0aw1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_j8ce8ur","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"rdc_j8b6oti","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
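The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and validated, assuming it arrives as a plain JSON string; `parse_codes` is a hypothetical helper for illustration, not part of the pipeline shown:

```python
import json

# The exact batch response shown above, as it might arrive from the model.
raw_response = """
[
  {"id":"rdc_j8bc1ta","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_j8awh01","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_j8b0aw1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_j8ce8ur","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_j8b6oti","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
"""

def parse_codes(text: str) -> dict:
    """Parse a batch response into {comment_id: dimensions},
    checking that every record carries all four coding dimensions."""
    required = {"responsibility", "reasoning", "policy", "emotion"}
    codes = {}
    for rec in json.loads(text):
        missing = required - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")
        codes[rec["id"]] = {k: rec[k] for k in sorted(required)}
    return codes

codes = parse_codes(raw_response)
print(codes["rdc_j8bc1ta"]["emotion"])  # indifference
```

Keying the results by comment ID makes it straightforward to join a record back to its source comment, as the "Coding Result" table above does for `rdc_j8bc1ta`.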