Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I studied computer science and I have *some* knowledge of how what most people call generative AI is trained. It is important to note that, except for the most basic trainable neural networks (perceptrons), every type of AI, from Grok to a bot that can play Pac-Man, relies on the concept of "hidden layers": parts of the neural network where the data is processed during training or use. That data may "mean" something that helps the AI accomplish its task, but it is practically illegible to human programmers, since the patterns in the hidden layers look more like noise. That doesn't mean we don't know what they are doing; for decades, computer scientists have created metrics to evaluate not just current AIs but also the "scaffolding," as the example programmer put it. We already know that Transformers, the family of algorithms current LLMs use, have limits on their ability to generate coherent text. For example, there is scientific evidence that hallucinations, or "agentic misalignments," are inevitable and can only be mitigated, not erased. Analyzing a large AI model is like analyzing a city: we can't know with certainty what every single person is doing at a given moment, or its consequences, but we still understand civil engineering, ethnography, and sound public policy thanks to the many branches of social science that measure and observe a city and parse meaningful data from it.
youtube AI Governance 2025-08-26T18:2… ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_UgwBrwpZvCM9hKjlZvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyt_yFq4AhRb89AtrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwR1pya6aNEcCgvlqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZ2M5S3HJfxlM9WrJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgycCoiKapaGFWZFvOF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"})
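Note that the raw response above closes the JSON array with `)` instead of `]`, which would make a strict `json.loads` fail and plausibly explains why every dimension was coded "unclear". A minimal sketch of a defensive parser for such responses (the function name and repair heuristic are my own illustration, not part of the coding pipeline):

```python
import json


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, tolerating a stray trailing ')'.

    Returns an empty list when the payload cannot be recovered, so a
    failed parse can fall back to 'unclear' for every dimension.
    """
    text = raw.strip()
    # Some responses close the array with ')' instead of ']'; repair that.
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"
    try:
        records = json.loads(text)
    except json.JSONDecodeError:
        return []
    # Keep only well-formed records that carry a comment id.
    return [r for r in records if isinstance(r, dict) and "id" in r]


# Example with the same malformation as the response above (one record shown).
raw = ('[{"id":"ytc_UgwBrwpZvCM9hKjlZvt4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"})')
records = parse_coding_response(raw)
```

With the trailing `)` repaired, the array parses and each record's dimensions become available instead of defaulting to "unclear".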