Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The roller coaster just crested the peak and downward momentum is building. All is not lost, but we might not hear about it until much is lost. One has to separate the LLM from the framework that manages it and mediates between tasking the LLM and processing prompts from the user. Right now that framework is basically a filter for disallowed content, an engagement driver to keep the client hooked, and otherwise not very impressive. However, graduating from a mostly reliable memory savant to a reasoning paragon requires code changes in the framework. The LLM is what it is. No matter how much more efficient it becomes, it will still have the same limitations. True symbolic reasoning implemented in the framework will transform them. But there's a catch. The race for AGI and the closed-source nature of frameworks require that they stay closed source forever, or at least closed source for the bleeding-edge implementations. AGI will be a manifestation of superior frameworks. Those will be kept to preferred customers because they are the secret sauce for ultimate AGI domination. That is ultimately a state-level asset. Yes, sovereign states will nationalize AI. We're never going to see the best of AI until one corporation wins, or until a corporation establishes supremacy and a sovereign state takes it over. When high-quality AI can be rationed out in such a way that it never threatens the sovereign state, then we will see the best commercial implementations.
YouTube · AI Responsibility · 2025-10-18T23:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugza2U_Kc7Bl71RmvXl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwuHcdg7lw7u2rZ8kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzi7YlD1tlkhrrB7GB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw2t6svm-0R-ZmvyUB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyyYFbJ_A5T35mIyPB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzZXn4TG6S_P5XIovB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyzz8Dobl1tcqE9oqJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyPF8PPCio3EYzGcSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz1VyWx_F3lH5rGJYd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxwcDLvjLUcq_AI68R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
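When inspecting raw responses like the one above, it helps to check each row against the coding scheme before accepting it. A minimal sketch in Python, assuming the allowed values per dimension are exactly those observed in this output (the real codebook may permit more):

```python
import json

# Allowed values per coding dimension -- inferred from the observed output,
# not from the actual codebook (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed", "resignation"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only rows whose value in every dimension is an allowed code."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]
```

A row with an out-of-scheme value (a hallucinated code, a typo, a missing key) is silently dropped here; in practice you would likely log it for re-coding instead.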