Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Modern neural networks are built around one main idea: given input X, predict output y. For chat-bots, X is a sequence of tokens (numbers representing word or image fragments) and y is the set of probabilities for the next token in that sequence. The bot uses those probabilities to pick the next token, adds it to the sequence, and repeats until it reaches some kind of <stop> token. The math the models use to calculate that next token is complex, but well documented by researchers. The problem comes from the billions of parameters that go into the calculation, all determined and refined by a high-speed trial-and-error loop that we call "training". What a chat-bot tells you depends on the data it was trained on and on its prompt (instructions or examples placed into the starting sequence of tokens). Training is time-consuming and expensive, but we can build prompts for specific requests and fill them with verified documents, Google search results, or non-mainstream sources and Elon Musk's opinions in the case of Grok.
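The generation loop this comment describes (predict probabilities, sample a token, append, repeat until a stop token) can be sketched in a few lines. This is a minimal illustration, not any real chatbot's implementation: `model` is a hypothetical stand-in that returns a fixed distribution over a tiny vocabulary, and the `STOP` token id is an assumption.

```python
import random

STOP = 0  # hypothetical <stop> token id for this sketch


def model(tokens):
    # Stand-in for the real network: a genuine model would compute
    # next-token probabilities from the sequence; here we just return
    # a fixed distribution over a 4-token vocabulary.
    return [0.1, 0.2, 0.3, 0.4]


def generate(prompt_tokens, max_len=20):
    tokens = list(prompt_tokens)
    while len(tokens) < max_len:
        probs = model(tokens)
        # Pick the next token using the probabilities...
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(next_token)  # ...add it to the sequence...
        if next_token == STOP:     # ...and stop on the <stop> token.
            break
    return tokens


print(generate([3, 1]))
```

The prompt (here `[3, 1]`) is simply the starting portion of the token sequence, which is why filling it with retrieved documents or search results steers what gets generated.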
youtube AI Governance 2025-08-28T00:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwGcqiUSu8cYDEti-54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwHYRPfGwNlHXWXURR4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxV20I5bW-QpU2dx954AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy2SmVzK217NOCMDNl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxNTV-vFGbfLC-Zoa14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
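A raw response like the one above has to be parsed and checked before its codings can be trusted. Below is a minimal sketch, assuming only the standard-library `json` module; the `EXPECTED_KEYS` set and the `parse_codings` helper are hypothetical names for illustration, not part of the tool shown here.

```python
import json

# The four coding dimensions plus the comment id, as seen in the
# raw response above (assumed schema for this sketch).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(text):
    """Parse a raw LLM response and verify each row is fully coded."""
    rows = json.loads(text)
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')} is missing {missing}")
    return rows


raw = ('[{"id":"ytc_UgwGcqiUSu8cYDEti-54AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')

rows = parse_codings(raw)
print(rows[0]["emotion"])  # approval
```

Validating every row before display is what lets a dashboard fall back to "unclear" values, as in the coding-result table above, rather than crash on a malformed model output.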