Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@greenaum It's not a "popular" explanation, it is literally the truth. A language model is a function with a set of parameters which are optimized to minimize cross entropy between output log probabilities and the correct next token assumed at probability 1.0.
It turns out that to predict tokens well, you develop "few-shot learning" capabilities, as in learning that goes beyond the first-order optimization algorithm used to train the parameters, but actually uses the current context as state to "learn new skills" (that turned out to be false actually). Ever since the GPT-2 paper it was always loosely defined how "powerful" this few-shot learning actually is, but it turns out that it is neither meta learning as many suggested nor applying a set of algorithms learned during pretraining encoded in the parameters. It is quite a bit weaker than many people dream it up to be.
There has been a great outcry whenever people downplay what language models actually do because of the hype surrounding it, but I have to admit that we have been fooled. It was always doing the same thing as the small models, just on a bigger scale. But the shift in quality was so profound, people dreamt up ridiculous explanations.
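The objective the commenter describes can be sketched concretely. This is a minimal toy illustration of next-token cross entropy against a one-hot target (the "correct next token assumed at probability 1.0"); the function and variable names are illustrative, not from any specific framework.

```python
import math

def next_token_cross_entropy(log_probs, target_index):
    """Loss for one position: -log p(correct next token).

    With a one-hot target, cross entropy reduces to the negative
    log-probability the model assigned to the observed token.
    """
    return -log_probs[target_index]

# Toy vocabulary of 4 tokens; probabilities sum to 1.
probs = [0.7, 0.1, 0.1, 0.1]
log_probs = [math.log(p) for p in probs]

# If token 0 is the observed next token, the loss is -log(0.7).
loss = next_token_cross_entropy(log_probs, target_index=0)
```

Training then adjusts the parameters to drive this loss down averaged over every position in the corpus; everything else the comment discusses (few-shot behavior, in-context "skills") is emergent from that single objective.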
Source: youtube · Video: AI Moral Status · Published: 2024-07-26T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwf-hZ3xtBw41T76rd4AaABAg.A6L6xfmya2kA6LMEOheRPV","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxnduCsu3QdGtnXbsN4AaABAg.A6Ky2kEN2K0A6MKVdw9dfj","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxnduCsu3QdGtnXbsN4AaABAg.A6Ky2kEN2K0A6MK_fksTeo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxnduCsu3QdGtnXbsN4AaABAg.A6Ky2kEN2K0A6MaNuhcPmj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwrPo8H_c5fit-zQqd4AaABAg.A6Kr4cxoiZ4A6PN_TRDK7u","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxQGeuz6AD6dejQDhN4AaABAg.A6Kph7BerT3A6NJFqfqmoY","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzXwsBi2X-BWRsM6eF4AaABAg.A6KnXnF_sAKA6MQUjPWiEv","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzXwsBi2X-BWRsM6eF4AaABAg.A6KnXnF_sAKA6NEfvNcS5Z","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzXwsBi2X-BWRsM6eF4AaABAg.A6KnXnF_sAKA6OoFkzB3dL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwQz3dbY6YCNJjYRzF4AaABAg.A6KlnDe7hmNA6LLVk9ORmD","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
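A raw response in this shape can be loaded into per-comment records and checked against the coding scheme shown in the table above. This is a hypothetical sketch: the record IDs are made-up placeholders, and the required-field set simply mirrors the keys visible in the JSON.

```python
import json

# Placeholder payload in the same shape as the raw response above;
# the IDs here are invented, not real comment IDs.
RAW = '''[
  {"id":"ytr_example1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_example2","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# One key per coding dimension, plus the comment ID.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(RAW)
# Keep only records that carry every required field.
valid = [r for r in records if REQUIRED <= r.keys()]

# Group comment IDs by coded emotion, as a simple downstream aggregate.
by_emotion = {}
for r in valid:
    by_emotion.setdefault(r["emotion"], []).append(r["id"])
```

Validating field presence (and, if the full codebook is known, allowed values per dimension) before aggregation catches malformed model output early.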