Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
we need a way for the program to extrapolate and it must have a layer for self-correction. Humans do this automatically (just not all of us all of the time). It will also need a way to tell if something is ethical and moral in the context with which it finds the information. Humans are naturally critical of everything we see, everything we read or are told, and we have and inherent level of knowledge for what is right or wrong (most of us). Until a program can do these things there will not be AGI. Now LLMs are just probability machines, that means all answers are what is probably correct in a general sense. However it may not be factually correct to the details/facts.
youtube 2025-12-22T18:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxq7DOfILXcmtB9wGJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxBaR_hDLHUlGjbVZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwEtG8t04RNKy6oS3J4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx7pufsNm1it4fVXDR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyEmFB0_08wWRFkMrR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuexsL6QizlmwQSTl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyuIpkqQ0lJDSWcsER4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwTY_ojOdE8e2FFtJ94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzh3-5JI-iIddVoxU94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugytda5P2ZTHMxXQXO54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
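As an illustrative sketch only (the parsing code below is an assumption, not part of the coding pipeline), the raw response can be parsed and a single comment's coding looked up by its id. Two entries from the array above are reproduced; in practice the full array would be parsed the same way. The last entry matches the Coding Result table for this comment.

```python
import json

# Subset of the raw LLM response above: a JSON array of per-comment codes.
# (Two of the ten entries, copied verbatim; the pipeline would parse the full array.)
raw = '''[
  {"id": "ytc_Ugxq7DOfILXcmtB9wGJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugytda5P2ZTHMxXQXO54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]'''

codes = json.loads(raw)

# Index entries by comment id so any coded comment can be inspected directly.
by_id = {entry["id"]: entry for entry in codes}

entry = by_id["ytc_Ugytda5P2ZTHMxXQXO54AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["emotion"])
# prints "developer deontological mixed", matching the Coding Result table
```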