Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An LLM doesn't scare me; it's just predictive text. An LLM works by breaking text into tiny pieces called tokens (words or word parts) and converting them into numbers. These numbers go through a huge neural network trained on patterns in language. The model then predicts the most likely next token based on the context, over and over, until it forms a full response. It's basically advanced predictive text on a massive scale. What I'd like to know is the system prompt you used in each of those LLMs; that's what makes the difference. For example, I can take one of the many LLMs I have running here at my home, change the system prompt, and turn a friendly, helpful LLM into an emotional (emulated) one. With the right system prompt you can make your LLM seem like a schoolgirl with a crush, or one with a pessimistic attitude towards humans, etc. No magic here, just prompt engineering.
youtube AI Moral Status 2025-08-13T20:4… ♥ 23
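The "predicts the most likely next token, over and over" loop the commenter describes can be sketched with a toy bigram model. This is an illustrative sketch only: it uses whole words as tokens and raw counts instead of a neural network, but the greedy generation loop is the same basic idea.

```python
# Toy next-token predictor: count which token follows each token in a
# tiny corpus, then repeatedly emit the most likely continuation.
# Real LLMs use subword tokens and a trained network; the loop is the
# same shape. All names here are illustrative.
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token again".split()

# Count which token follows each token (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:          # no known continuation: stop early
            break
        # Greedy decoding: always take the single most likely next token.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("model", 2))  # "model predicts the"
```

Changing the "system prompt" in the commenter's sense amounts to changing the context the model conditions on before this loop starts, which shifts which continuations are most likely.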
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgzLT029cQa0FwbspLV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugz1jCHJu8pxy9PWZUR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzK5oJEJGWgJ5tWTvV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwH2N-xqe8nDsQa9JN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyqBgkXlbVHlxOGqnh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxqgbJhdeR_67yvEtl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzxFC8e0J-CE6Wa9sB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyoQhc58uCrknTiD4N4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwJ8pJ0zQftqhhwUCR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxJHgImq9Wi9cXhRM94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}]
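A minimal sketch (not the project's actual pipeline) of how a raw response like the one above might be validated and tallied. The two entries are copied from the sample response; the field names match its schema.

```python
# Parse the raw LLM response (a JSON array of per-comment codings),
# check each row carries all coded dimensions, and tally emotions.
# The `raw` string holds two entries copied from the sample above.
import json
from collections import Counter

raw = '''[
 {"id":"ytc_UgzLT029cQa0FwbspLV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxqgbJhdeR_67yvEtl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]'''

rows = json.loads(raw)

# Every row should carry the id plus all four coded dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(REQUIRED <= row.keys() for row in rows)

emotion_counts = Counter(row["emotion"] for row in rows)
print(emotion_counts)  # Counter({'approval': 1, 'fear': 1})
```

Validating the schema before tallying matters here because LLM outputs can silently drop fields; a missing key would otherwise surface later as a confusing KeyError.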