Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If anyone is interested in why this is true: AI memory (called latent space) is sort of like a map where "things" that are related are organized together. When you ask AI a question, your prompt gets translated into vectors (sort of positions); this is called embedding. Then the AI will start gathering key words in reverse order to "build a response". The gathering of those words is done by "distance", so you get core words that are related to each other. For example, WW2 and Germany will be in the same latent subspace as Hitler, which is to say, those will be close to each other.

Once the AI has those key words, it starts generating the rest of the text to organize an actual English answer. This is also done in reverse, and it's done in parts (called chunks). Once a chunk completes, it's reversed and streamed to the client, while the next chunk is built. This is why you get streaming answers from AI.

In effect, if you ask AI nicely, you will get answers from the latent subspace of answers to questions that were polite, which are often higher quality because that's how humans work: they respond better to positivity. Same thing with personas: your answer will be contextualized into the latent subspace of the persona, like instructing AI to respond like a film director will preposition context in the film creation latent space.
youtube AI Moral Status 2025-04-05T22:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugxcl5Gm5gmGebn4ZuR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwofssQtEpNj91SnDp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxlsPtRgEE2gA400ml4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgysjU1al9jbqAPlpFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx0YIPlFqC0PhIzFUl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-RITWLnxh6kqDOup4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjFV9YkG6v8l5k0Fp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzww4Q7Eu4Z1gK8g9N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxDsiqybE_Xc2cFe8d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugylr7j8RY8vBO-AeWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
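A raw response like the one above can be indexed by comment id to look up the codes for any individual comment. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON shown; the helper function name and the two-record sample are illustrative, not part of the actual tooling.

```python
import json

# Two records excerpted from the raw LLM response above, for illustration.
RAW_RESPONSE = """[
  {"id": "ytc_Ugxcl5Gm5gmGebn4ZuR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxlsPtRgEE2gA400ml4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]"""

def codes_by_comment_id(raw: str) -> dict:
    """Parse the raw JSON and index each coded record by its comment id."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = codes_by_comment_id(RAW_RESPONSE)
# Look up the codes for the comment shown in this section.
print(codes["ytc_UgxlsPtRgEE2gA400ml4AaABAg"]["responsibility"])  # unclear
```

A lookup like this makes it easy to cross-check the "Coding Result" table against the exact model output: the record for `ytc_UgxlsPtRgEE2gA400ml4AaABAg` carries `unclear` on every dimension, matching the table above.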