Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is the problem with Waymo and why Tesla full self driving is so much better…" (ytc_Ugx5VTQmO…)
- "if it were realistic the robot would have cried sexual assault and patriarch…" (ytc_UgwTRKCUU…)
- "100% agree. We've failed to predict pretty much anything that comes to pass. Or …" (ytr_Ugwt9Ps-W…)
- "Yeah, if that guy had much to do with inventing AI, we're all definitely cooked.…" (ytc_UgxX_wl0s…)
- "see, ai never argues with anyone. but with me, i told them something that they h…" (ytc_UgxO_Viru…)
- "If the theory holds that AI takes jobs, the lack of tax revenue will further lea…" (ytc_UgzVd3Har…)
- "Enjoy paying doubled utility bills to fund these billion dollar companies and th…" (ytc_UgxSioDZF…)
- "So all the AI 🤖 bots taking jobs over who will be buying all the products if we …" (ytc_UgwGJXSZZ…)
Comment (at 1:55)
There are two issues with this statement:
1. The original GPT-1 was trained as merely a text predictor, OpenAI developed from this starting point to train for other things too - its initial training is as a text predictor, but it is also trained for accuracy, honesty etc (with issues obviously), plus a stage with human feedback learning, where humans vet answers and tell it what good responses look like and what poorer ones do. Performing these tasks does require understanding. Because what's the best way to accurately predict text correctly, factually, and helpfully? To understand it and the information you are providing. There's a word for describing when AIs don't actually understand what they're doing: overfitting. Does it understand everything it says or that is said to it? No. Neither do I. But it has some understanding of some things.
2. The way the AI is designed to run is to *output* one word at a time. But that does not mean it can't plan ahead. *Inside* its neural network it may well be thinking ahead. Internally, it might work out what its entire next sentence is going to be before outputting the first word. Saying "it only predicts one word at a time, it has no idea what it's going to say next" would be equally valid if I said it about a human talking to me - they only say one word at a time, they have no idea where their sentences are actually going. But of course the *output alone* is not all that's important - the human is thinking *inside* their mind. The AI may be doing the same, just in a different way.
Of course everyone intrinsically understands this to not be true for humans because we are all human (except Zuckerberg) and we know from our own experience of our thoughts that humans can plan sentences ahead of time. But we just assume the AI doesn't for some reason?
Source: youtube, "AI Moral Status", 2023-08-21T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
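The coded dimensions in the table above can be sanity-checked programmatically. The sketch below validates one coded record against per-dimension value sets; note that the allowed values are inferred only from the records visible in this dump, so the real codebook may define additional categories.

```python
# Allowed values per coding dimension, inferred from the records visible in
# this dump -- the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "none"},
    "reasoning": {"mixed", "consequentialist", "virtue", "unclear"},
    "policy": {"unclear", "none"},
    "emotion": {"mixed", "indifference", "fear", "outrage", "approval"},
}

def validate(record: dict) -> list:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record for the comment shown above validates cleanly.
print(validate({
    "id": "ytc_UgwLKcZIJ2Gu-z4vBpx4AaABAg",
    "responsibility": "developer",
    "reasoning": "mixed",
    "policy": "unclear",
    "emotion": "mixed",
}))  # -> []
```

A record with a value outside the inferred sets would come back with a descriptive problem string instead of an empty list.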
Raw LLM Response
```json
[
{"id":"ytc_UgwLKcZIJ2Gu-z4vBpx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzuW2S__EyoNwMMog54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwZ5VlgJfYOsQZHXVN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOS5zO0cr6ND0oz7J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxJSxWIkaANlilZXa94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzASyQvFRxXdy-lODN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw0cJ8LyDR-TpZNiQh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzECHCRA_mz93qUNYV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzqxlP_5E2SAaww4XB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2_efK0oHKDaBoti14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
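The "look up by comment ID" view above amounts to parsing a raw response like this one and indexing the records by their `id` field. A minimal sketch, using two records taken from the response above:

```python
import json

# Two records from the raw LLM response shown above, kept short for brevity.
raw_response = """[
  {"id":"ytc_UgwLKcZIJ2Gu-z4vBpx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzASyQvFRxXdy-lODN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Parse the JSON array, then index by comment ID so any coded comment
# can be looked up directly.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgwLKcZIJ2Gu-z4vBpx4AaABAg"]
print(coding["responsibility"])  # -> developer
print(coding["emotion"])         # -> mixed
```

In practice the raw response would be read from the tool's storage rather than embedded as a string literal, but the indexing step is the same.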