Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is the problem with applying the term "artificial intelligence" to ChatGPT, or any large language model. "Intelligence," to most people, generally implies the ability to reason, but LLMs have _no_ ability to reason whatsoever, and no understanding of what they are writing. They simply look at the probabilities of words appearing after other words and generate new text based on those probabilities. (This is why it generates so many "fake" references; it's got no idea what a reference even is; it just generates text that looks like a reference. I've seen this with URLs as well.) In essence, ChatGPT is a great bullshitter, and the "improvements" made from, e.g., ChatGPT 3 to ChatGPT 4 make it a better (i.e., more convincing) bullshitter without changing at all that it still does not reason or understand anything. It's being mis-sold as "intelligence," and that's going to lead to a lot more problems like this one.
youtube AI Responsibility 2023-06-10T15:1… ♥ 47
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | deontological
Policy         | unclear
Emotion        | resignation
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmZlBl_t2G","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmbYXoGwwJ","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzDP5cMIxVWHMTRcS54AaABAg.9qmVCZq3bKc9qmcaVf8LZD","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgyUVMvepJCnjgzxLRx4AaABAg.9qmV1pU3sM19qm_7Lmrcvc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxwfaoeeduUsRI7IVJ4AaABAg.9qmUzC3bhJH9qmd1ROOfRu","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzH-Ty46iRzvUjm9a94AaABAg.9qmUxEllL-e9qmdZWagYCQ","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgzyiDqJgQKHvQdK2wJ4AaABAg.9qmUOsPQ3yV9qmYHIH4gRe","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgznD4DPA9Z9uUeLN3l4AaABAg.9qmU5i9Hqw09qmZ7vDG6RM","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgySRvqrU5UzfOdjkjl4AaABAg.9qmTnrsuoSl9qmVeyD5afm","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugxg-csX0RG1xeW3uJp4AaABAg.9qmTmfciaIP9qmgLjXUqtq","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
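A raw response in this shape can be turned back into per-comment coding results with a few lines of Python. This is a minimal sketch, assuming only what the example above shows: the response is a JSON array of records, each with an "id" plus the four coding dimensions. The function name and the short sample id are illustrative, not part of the tool.

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Index a raw LLM coding response (a JSON array of records) by comment id.

    Each record is assumed to carry an "id" key plus the coding dimensions
    (responsibility, reasoning, policy, emotion), as in the example response.
    """
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"deontological","policy":"unclear","emotion":"resignation"}]')
coded = parse_coding_response(raw)
print(coded["ytr_example"]["emotion"])  # resignation
```

In a real response the array holds one record per comment in the batch, so the returned dict lets you look up any coded comment by its id.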