Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@greenaum It's not a "popular" explanation, it is literally the truth. A language model is a function with a set of parameters which are optimized to minimize cross entropy between output log probabilities and the correct next token assumed at probability 1.0. It turns out that to predict tokens well, you develop "few shot learning" capabilities, as in learning that goes beyond the first order optimization algorithm used to train the parameters, but actually use the current context as state to "learn new skills" (that turned out to be false actually). It was always loosely defined, ever since the GPT-2 paper, how "powerful" this few-shot learning actually is, but it turns out that it is neither meta learning as many suggested nor applying a set of algorithms learned during pretraining encoded in the parameters. It is quite a bit weaker than many people dream it up to be. There has been a great outcry whenever people downplay what language models actually do because of the hype surrounding it, but I have to admit that we have been fooled. It was always doing the same thing as the small models, just on a bigger scale. But the shift in quality was so profound, people dreamt up ridiculous explanations.
youtube AI Moral Status 2024-07-26T16:0…
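The comment above describes the standard next-token training objective: cross entropy between the model's output log probabilities and a one-hot target (the true next token at probability 1.0), which reduces to the negative log probability of the correct token. A minimal sketch of that computation, using a hypothetical toy vocabulary and made-up logit values for illustration:

```python
import math

def next_token_loss(logits, target_index):
    """Cross entropy of a softmax distribution against a one-hot target."""
    # Numerically stable softmax over the vocabulary logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    log_probs = [math.log(e / total) for e in exps]
    # With the true token at probability 1.0, cross entropy collapses to
    # the negative log probability assigned to that token.
    return -log_probs[target_index]

# Toy 4-token vocabulary; the model puts the highest logit on token 2.
loss_correct = next_token_loss([0.5, 0.1, 2.0, -1.0], target_index=2)
loss_wrong = next_token_loss([0.5, 0.1, 2.0, -1.0], target_index=3)
```

Training drives this loss down by first-order optimization of the parameters; the loss is lower when the model already concentrates probability on the correct token, as the two calls above illustrate.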
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugwf-hZ3xtBw41T76rd4AaABAg.A6L6xfmya2kA6LMEOheRPV", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgxnduCsu3QdGtnXbsN4AaABAg.A6Ky2kEN2K0A6MKVdw9dfj", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxnduCsu3QdGtnXbsN4AaABAg.A6Ky2kEN2K0A6MK_fksTeo", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxnduCsu3QdGtnXbsN4AaABAg.A6Ky2kEN2K0A6MaNuhcPmj", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwrPo8H_c5fit-zQqd4AaABAg.A6Kr4cxoiZ4A6PN_TRDK7u", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxQGeuz6AD6dejQDhN4AaABAg.A6Kph7BerT3A6NJFqfqmoY", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgzXwsBi2X-BWRsM6eF4AaABAg.A6KnXnF_sAKA6MQUjPWiEv", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgzXwsBi2X-BWRsM6eF4AaABAg.A6KnXnF_sAKA6NEfvNcS5Z", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzXwsBi2X-BWRsM6eF4AaABAg.A6KnXnF_sAKA6OoFkzB3dL", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwQz3dbY6YCNJjYRzF4AaABAg.A6KlnDe7hmNA6LLVk9ORmD", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
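The raw response is a JSON array in which each record codes one comment on four dimensions (responsibility, reasoning, policy, emotion) plus an id. A minimal sketch of how such a batch could be parsed and validated before use, with hypothetical shortened ids standing in for the real ones:

```python
import json

# Every coded record is expected to carry exactly these keys.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's schema."""
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
    return records

# Hypothetical two-record response in the same shape as the output above.
RAW = (
    '[{"id":"ytr_abc","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"},'
    '{"id":"ytr_def","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"}]'
)
records = parse_codings(RAW)
emotions = [r["emotion"] for r in records]
```

Validating the schema up front catches malformed model output (a dropped field, a truncated array) before the codings are written back to the coding-result tables.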