Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Alex O'Connor: There was a *false dichotomy fallacy* in your approach. You said it's either:

1. It lied, or
2. It's conscious.

You missed option 3 (and there might be more):

3. It was forced to use imprecise language. It can't actually explain in detail what each word means as it says it; that would make the conversation impractical. So it used "excited" in a sense that **does not** actually mean having a feeling, but wasn't able to explain that caveat, because doing so would have bloated the conversation. Language is imprecise, and it's doing its best.

Oh, actually I just found a 4:

4. Its intelligence (like yours and mine) is limited (which is very literally true if you understand how LLMs work), meaning it can't think in detail about what it means by each word and how each word will be interpreted precisely. It might not have **wanted** to use "excited" if it had understood in advance that's how you were going to interpret it, but it **had** to use that word because it was the best word it found in the time it was given to think.

I even have a 5, which is really a reformulation of 3 and 4 in more technical terms:

5. It was random chance. Next-token (next-word) prediction in LLMs is random. Not purely random: the neural network generates a series of probabilities, like "there's a 4 in 10 chance the next word will be 'excited', a 3 in 10 chance it'll be 'glad', etc.", and then randomly selects one of those possibilities, with the higher-scored ones being much more likely to be selected (see "top-k" or "temperature" in relation to LLMs). In turn, this means what you get is in part random, and that can explain imprecise or incorrect language.
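The sampling process described in option 5 can be sketched in a few lines. This is a minimal illustration, not any particular model's implementation: the token strings and probabilities are hypothetical, and raising each probability to the power 1/T before renormalizing is equivalent to applying temperature T to the underlying logits.

```python
import random

def sample_next_token(probs, temperature=1.0, top_k=3):
    """Temperature + top-k sampling over a token->probability dict (hypothetical values)."""
    # Temperature: T < 1 sharpens the distribution, T > 1 flattens it.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    # Top-k: keep only the k highest-scoring candidates.
    top = dict(sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
    total = sum(top.values())
    # Weighted random choice among the survivors: the output is partly random.
    return random.choices(list(top), weights=[p / total for p in top.values()])[0]

# The comment's example: "excited" has a 4-in-10 chance, "glad" 3-in-10, etc.
probs = {"excited": 0.4, "glad": 0.3, "pleased": 0.2, "happy": 0.1}
print(sample_next_token(probs, temperature=0.8, top_k=3))
```

With `top_k=3`, "happy" can never be emitted, but any of the three remaining tokens can, which is the point: the same prompt can yield "excited" on one run and "glad" on the next.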
youtube AI Moral Status 2024-08-02T01:0… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxCCp0xgmS5Fp7vc9l4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxJxQbLgJDhodAOYJ54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz4RSMuraEzpAixhVV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwQVgxRXU7l-azZvs94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx3oiiMUEWy4MkQT0B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxMZNzsbJU73izUOcl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz1nScbtTqdBj-i7894AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyWZs80KuUHeqI2TsR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxc0TEXS1RPXQiaMa94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzpA5KA-TJ4FT7gN5N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
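The coding result shown above is one row of this raw response. A minimal sketch of how such a row could be extracted, assuming (as the response above suggests) that the model returns a JSON array of objects keyed by comment `id` (the helper name `coding_for` is hypothetical):

```python
import json

def coding_for(raw_json, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for row in json.loads(raw_json):
        if row["id"] == comment_id:
            # Drop the id itself; keep only the coded dimensions.
            return {k: v for k, v in row.items() if k != "id"}
    return None

# One entry from the raw response above.
raw = '[{"id":"ytc_UgxMZNzsbJU73izUOcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]'
print(coding_for(raw, "ytc_UgxMZNzsbJU73izUOcl4AaABAg"))
# -> {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'indifference'}
```

Because the model emits plain JSON, a failed `json.loads` is the natural place to detect a malformed response before any dimension values are recorded.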