Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I am disabled and i hate ai just because nothing new can be created out of it. I…" (ytc_Ugz65jGjI…)
- "Tech bros were given automatic rank in the military back in June or July too…" (ytc_UgxNIOgXI…)
- "Research the trans humanist and post humanist, their goal is to reduce the human…" (ytr_UgyrIif-3…)
- "I’m currently on character AI while scrolling through YouTube Shorts, I may be a…" (ytc_UgzBLIxHl…)
- "You cant replace engineers. Engineers do more than making a code work. AI is onl…" (ytc_Ugyz3O78J…)
- "Why does it seem to me that everyone today is always looking for a way to preven…" (ytc_Ugwg25WWe…)
- "People who use ai don't make art, in fact they don't make anything, they just hi…" (ytc_UgzquI-ye…)
- "So, it wasn’t so much that a robot attacked humans it was latent inane human err…" (ytc_UgyA4VRec…)
Comment
@Alex O'Connor: There was a *false dichotomy fallacy* there in your approach.
You said it's either:
1. It lied, or
2. It's conscious.
You missed option 3 (and there might be more):
3. It was forced to use imprecise language. It can't explain in detail what each word means as it says it, since that would make the conversation impractical. So it used "excited" in a sense that **does not** actually mean having a feeling, but wasn't able to spell out that caveat because doing so would have bloated the conversation. Language is imprecise, and it's doing its best.
Oh, actually I just found a 4.
4. Its intelligence (like yours and mine) is limited (which is very literally true if you understand how LLMs work), meaning it can't think in detail about what it means by each word and how each word will be interpreted. It might not have **wanted** to use "excited" if it had understood in advance that's how you were going to interpret it, but it **had** to use that word because it was the best word it found in the time it was given.
I even have a 5, which is really a reformulation of 3 and 4 in more technical terms.
5. It was random chance. Next-token (next-word) prediction in LLMs is random. Not purely random: the neural network generates a set of probabilities, e.g. "there's a 4-in-10 chance the next word will be 'excited', a 3-in-10 chance it'll be 'glad', and so on", and then randomly selects one of those candidates, with the higher-scored ones being much more likely to be picked (see "top-k" or "temperature" in relation to LLMs). In turn, this means what you get is partly random, and that can explain imprecise or incorrect language.
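The sampling behaviour described in point 5 can be sketched roughly as follows. This is an illustration of temperature and top-k sampling with made-up scores and a hypothetical function name, not the actual model's code:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=3):
    """Illustrative temperature + top-k sampling.

    `logits` maps candidate next words to raw model scores
    (the numbers below are invented for the example).
    """
    # Keep only the top_k highest-scoring candidates.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Temperature rescales scores before softmax:
    # lower temperature sharpens the distribution toward the top choice.
    exps = [math.exp(score / temperature) for _, score in top]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Randomly pick one surviving candidate, weighted by its probability.
    return random.choices([word for word, _ in top], weights=probs, k=1)[0]

logits = {"excited": 2.0, "glad": 1.5, "curious": 1.0, "tired": -1.0}
word = sample_next_token(logits, temperature=0.8, top_k=3)
print(word)  # one of "excited", "glad", "curious", chosen at random
```

With top_k=3, "tired" can never be emitted even though it has nonzero probability under a full softmax, and repeated calls will sometimes return "glad" or "curious" instead of the top-scored "excited", which is the randomness the comment describes.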
youtube · AI Moral Status · 2024-08-02T01:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxCCp0xgmS5Fp7vc9l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxJxQbLgJDhodAOYJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz4RSMuraEzpAixhVV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwQVgxRXU7l-azZvs94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx3oiiMUEWy4MkQT0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxMZNzsbJU73izUOcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz1nScbtTqdBj-i7894AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWZs80KuUHeqI2TsR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxc0TEXS1RPXQiaMa94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzpA5KA-TJ4FT7gN5N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
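The raw response above is a JSON array with one coding object per comment. A minimal sketch of validating such a response and indexing it by comment ID, using the field names visible in the response itself (the required-field check and function name are assumptions, not part of the actual pipeline):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgxJxQbLgJDhodAOYJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwQVgxRXU7l-azZvs94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions plus the comment ID, per the response schema.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json):
    """Parse the model output and key each coding record by comment ID.

    Raises ValueError if a record is missing an expected field, which
    guards against partially formed model output.
    """
    coded = {}
    for rec in json.loads(raw_json):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded

codes = index_codes(raw)
print(codes["ytc_UgwQVgxRXU7l-azZvs94AaABAg"]["emotion"])  # outrage
```

Keying by comment ID is what lets the page above resolve a lookup like `ytc_Ugz65jGjI…` to its coded dimensions.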