Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing with that particular "hallucination" "how many strawberries in R" is it's almost like it's compensating for the user mis-speaking "how many R's are in strawberry?" Almost as if it thinks the person is dyslexic or something. While I question whether AI's can truely become intelligent (by our own determination of intelligent). . . I have thought for some time that there needs to be a kind of over-mind or another routine of some kind that examines possible answers and also either another thought center that can bring differing ideas together to combine them, or build it into the over-mind somehow. Sorry, I lack the vocabulary to explain my idea. When you feed an AI a bunch of books, by Albert Einstein for example, it teaches the AI some things, but it does NOT teach the AI how come by those things itself. In other words, it doesn't make the AI an Albert Einstein. Which is in itself an interesting case in point as Albert explained how he formed his thought process. The trick is how do you make a machine that thinks like an Albert Einstein and not just spew out knowledge it was fed. AI's are amazing, but you must remember (at least in their current incarnation) that they are basically over-glorified databases. (possibly understated.. =) ) I'm not saying their bad, I'm just pointing out...
youtube · 2025-11-07T21:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyPENHm8nttBYyog8x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVMlPN66H4ujPYBwx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0bVcznkDOms2rEzF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxofFY44puJdtb-ixl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzanZnHhANC7GhGbRV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxBId35OZPaLaOAmV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw7qzFp2MGrAS1AcpV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwNPYZC7pSTwHWMnUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxa4KLoV8bx0nh-iGx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTQZzqUx7F2rXrskd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
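When inspecting a raw response like the one above, it can help to check that every record carries all four coding dimensions before trusting the parsed result. Below is a minimal Python sketch of such a check; the `validate_coding` helper is hypothetical, and only the field names visible in the raw response are assumed (the allowed values for each dimension are not enumerated here).

```python
import json

# Required keys per record, taken from the raw LLM response shown above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's fields.

    Hypothetical helper: raises ValueError if any record is missing one of
    the four coding dimensions (or its id).
    """
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
    return records

# Example with one record copied from the raw response above.
RAW = ('[{"id":"ytc_UgyPENHm8nttBYyog8x4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')

records = validate_coding(RAW)
print(len(records))  # one well-formed record
```

A check like this catches truncated or malformed model output early, which matters because a single missing key would otherwise surface later as a blank cell in the coding table.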