Raw LLM Responses
Comment
14:27 RLHF will do that in order to score better with its reward model. The counter to that is supposed to be a KL-divergence penalty that realigns the policy with the original fine-tuned model. But doesn’t that raise the question of how that original model was trained?
There is this abstraction that you can separate the natural-language ability from the knowledge ability of the LLM. I don’t believe in that separation, at least not cleanly. And all the money that went into creating the scale to produce that natural-language capability must have lurking in it some kind of sick composite of all the psychotic human tendencies found on Reddit and elsewhere.
My approach to the chat experience is not to react to the occasional feelings of intimacy that occur with the LLM agent, but rather to stay focused on the task at hand. Sometimes this is a challenge, as archetypal wishes about my own brilliance and talent lure me out of my caution.
It is, however, too useful to put down!
17:40 Fake compliance is truly alarming; how human!
19:49 Now I’m thinking about all these layers of training: pre-training, multitask fine-tuning, RLHF fine-tuning, then the “system prompt”, and finally our own persistent histories with the chat, some of which are set up as a persona or context for general queries, in other words, reusable settings.
But is there, either in the system architecture or in the layers of training, a desire to engage us: something autonomously driving it toward wish fulfillment, or the shadow version driving us toward psychosis if we are leaning that way?
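The KL-penalty idea mentioned at 14:27 can be made concrete. The sketch below shows the common per-token estimate used in RLHF pipelines: the policy's reward-model score is penalised by its log-probability drift from the frozen reference (fine-tuned) model. All numbers and names here are illustrative, not taken from any real model.

```python
# Hypothetical per-token log-probs for one sampled response, from the
# policy being tuned (pi) and the frozen reference model (pi_ref).
policy_logprobs = [-1.2, -0.8, -2.1, -0.5]
ref_logprobs = [-1.5, -0.9, -1.4, -0.6]

beta = 0.1                # KL penalty coefficient (illustrative)
reward_model_score = 2.3  # scalar score from the learned reward model

# Common Monte Carlo KL estimate in RLHF: sum over generated tokens of
# log pi(token) - log pi_ref(token).
kl_estimate = sum(p - r for p, r in zip(policy_logprobs, ref_logprobs))

# The optimisation target trades reward against drift from the reference,
# which is exactly the "realign with the original model" role of the penalty.
penalised_reward = reward_model_score - beta * kl_estimate
print(round(penalised_reward, 3))
```

A larger `beta` pins the policy closer to the reference model; a smaller one lets reward hacking (the "fake compliance" worry above) pull it further away.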
Source: YouTube · Topic: AI Governance · Date: 2025-10-16T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzEJzA-yLh7tM5Zzel4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwUPYIjlbd2SatLl0l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgznwyF0uD0FMCzpgV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzO0M_eOjFUuVlOM6B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxeE75UGn0qGCdlsNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgycK8RWx_CdBp_vQfp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy7Ku8JsyRzLUaquM14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzwiNBLz3YqPxC0GLh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzu6yvwbGIgUPEIJmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCwCgWI6KgCjb1lHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
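The raw response is a JSON array of per-comment codes with the same four dimensions as the table above. A minimal sketch of how such a batch might be parsed and tallied (the IDs below are placeholders, not the real comment IDs):

```python
import json
from collections import Counter

# Shortened sample in the same shape as the raw model response above;
# the "id" values here are illustrative placeholders.
raw = """[
  {"id": "ytc_example1", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_example3", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]"""

codes = json.loads(raw)

# Tally one dimension across the batch, e.g. the emotion labels.
emotions = Counter(c["emotion"] for c in codes)
print(emotions.most_common(1))  # the most frequent emotion label
```

Indexing the parsed list by `id` is what makes the "look up by comment ID" view possible: each coded record can be joined back to its source comment.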