Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:10 "We need an abstract thinking model that can be used for any purpose". I don't believe this is true. The brain has a visual cortex that works differently than other areas of the brain. I think it is perfectly reasonable to have certain network topologies for specific tasks. We seem to have ones that work reasonably well for vision and language; we don't yet have one for cognition.

I am chuckling over Wei Ping's criticism of OpenAI's statement that the model should just say "I don't know". This is so typical of a mathematician that it begs to be turned into its own stereotype. Those of us who deal with error -- I will resist the temptation to say the real world here -- such as scientists and engineers, know perfectly well that there is noise in our world, and it is just as important to indicate accurately how well you think you understand something as it is to provide the understanding. Wei Ping need only contemplate the average President Trump speech: while he is quite sure about everything, the fact that he is not correct in these estimations, or indeed in many of his conclusions, shows that a simple "according to scripture" approach with no room for error is exactly what we don't need -- unless you work for an authoritarian state.

The interpolation vs. extrapolation distinction is a good point. However, extrapolation can become interpolation with more experience. In science we have experimentation to bring mere speculation into documented experience, so I don't think that AI is necessarily unable to cope. It is also not as though we humans do that well when trying to imagine what happens as speeds approach c, for example.

What AI is currently missing is a real-world presence. We probably should not expect practical and productive AI, outside of a few limited counterexamples, until AI is free to explore the world on its own, probably as a robot. We can only hope it comes to some favorable opinion of its makers, though I seriously doubt it.
youtube 2025-11-12T06:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxgJEJ-gbDGb8VyRtR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWuXTEF5XcBHgI4jl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgydKNuVuFADqXmPLEN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugym-cHCpFstqaYUK1V4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgySdv4NvjROu46CKW54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugys-AhMtt2SGRiCgad4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx4Yoy3-pbrCtKHdG54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxtW1DO6odH2bUl__54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyMOJdCGwNdnw6ezGF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwgcdNKJt30MtrnKnN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
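A raw response like the one above can be parsed back into per-comment codes with a few lines of Python. This is a minimal sketch, not the tool's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come straight from the response, but the helper name `index_codes` is hypothetical, and the `RAW` string below contains only two records copied from the response for brevity.

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW = (
    '[{"id":"ytc_UgxgJEJ-gbDGb8VyRtR4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_Ugym-cHCpFstqaYUK1V4AaABAg","responsibility":"unclear",'
    '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]'
)

def index_codes(raw: str) -> dict:
    """Map each comment id to its coded dimensions (hypothetical helper)."""
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in json.loads(raw)
    }

codes = index_codes(RAW)
print(codes["ytc_Ugym-cHCpFstqaYUK1V4AaABAg"]["emotion"])  # fear
```

Indexing by comment id makes it easy to join the model's codes back to the original comments, as the "Coding Result" table above does for one of them.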