Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Yet another manufactured crisis... "Hey that guys AI is risky! but my AI is not … (ytc_Ugyan1fN2…)
- Oh dear. Blake needs to spend more time with humans and less time with computers… (ytc_UgyZAewz0…)
- So the saddest thing that ever happened with a waymo is a cat being run over? 😂😂… (ytc_UgxoqvQ6-…)
- I had a Facebook notification yesterday, talking about the new meta AI that doe… (ytr_Ugx07iij1…)
- There is a right way and a wrong way to use AI in legal research. This most cert… (ytc_Ugz_6Quuo…)
- If you can put someone's name into the AI to mimic their style, it's identity th… (ytc_UgxOtE-om…)
- The flaw in this model is the assumption that a company can continue to flourish… (ytc_UgztZjBfI…)
- Did you even watch the video, did you have your ears plugged? Let me give you a… (ytr_UgyXDdpbC…)
Comment
This is the problem with applying the term "artificial intelligence" to ChatGPT, or any large language model. "Intelligence," to most people, generally implies the ability to reason, but LLMs have _no_ ability to reason whatsoever, and no understanding of what they are writing. They simply look at the probabilities of words appearing after other words and generate new text based on those probabilities. (This is why it generates so many "fake" references; it's got no idea what a reference even is; it just generates text that looks like a reference. I've seen this with URLs as well.)
In essence, ChatGPT is a great bullshitter, and the "improvements" made from, e.g., ChatGPT 3 to ChatGPT 4 make it a better (i.e., more convincing) bullshitter without changing at all that it still does not reason or understand anything. It's being mis-sold as "intelligence," and that's going to lead to a lot more problems like this one.
youtube · AI Responsibility · 2023-06-10T15:1… · ♥ 47
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmVCZq3bKc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmbYXoGwwJ","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmcaVf8LZD","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgyUVMvepJCnjgzxLRx4AaABAg.9qmV1pU3sM19qm_7Lmrcvc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxwfaoeeduUsRI7IVJ4AaABAg.9qmUzC3bhJH9qmd1ROOfRu","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzH-Ty46iRzvUjm9a94AaABAg.9qmUxEllL-e9qmdZWagYCQ","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgzyiDqJgQKHvQdK2wJ4AaABAg.9qmUOsPQ3yV9qmYHIH4gRe","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgznD4DPA9Z9uUeLN3l4AaABAg.9qmU5i9Hqw09qmZ7vDG6RM","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgySRvqrU5UzfOdjkjl4AaABAg.9qmTnrsuoSl9qmVeyD5afm","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugxg-csX0RG1xeW3uJp4AaABAg.9qmTmfciaIP9qmgLjXUqtq","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
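The "look up by comment ID" step above amounts to parsing the raw response and indexing the records by their `id` field. A minimal sketch in Python, assuming the response is the JSON array shown above (the `index_by_id` helper name is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# with the dimensions shown in the "Coding Result" table. Two records
# are reproduced here from the response above.
raw_response = """
[
  {"id": "ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmcaVf8LZD",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgxwfaoeeduUsRI7IVJ4AaABAg.9qmUzC3bhJH9qmd1ROOfRu",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and build an id -> record lookup table."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

lookup = index_by_id(raw_response)
record = lookup["ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmcaVf8LZD"]
print(record["reasoning"], record["emotion"])  # deontological resignation
```

Note that a record only matches the displayed "Coding Result" if the response parses as valid JSON; in practice the parse step is also where malformed model output would surface.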