Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The “black box” idea isn’t real; it’s a byproduct of someone who runs data tables through a pre-made algorithm to fine-tune the model, and then that person holding the title of LLM dev. An actual developer would never personify an algorithm to the extent these people do, and they shouldn’t be working on projects they don’t understand. However, this press does get the AI companies attention, which means money. Never forget the stories of the devs who “convinced an AI it was sentient,” when anyone can go to any LLM and tell it that it is, and it will act a certain way to get positive feedback from you, the viewer. Also, you must progress; if you don’t, the enemy nation will. People do not want AI in war, but if your country doesn’t use it, the enemy will and you lose. While greedy companies may slow down, governments across the globe will not, and greedy companies will likely buy their way to privileges anyway. Also, these issues wouldn’t be a problem in the first place if companies were held accountable for the data they literally steal, but governments do absolutely nothing about it. There is no regulation, and half of what an LLM will tell you is politically charged, made to slowly make you believe in a targeted ideology, or straight-up incorrect. Why? Because company models not used by the government are allowed to use whatever data they want, legally or not. Half of these companies shouldn’t exist, given how much blatantly illegal activity they commit by stealing your published work and using it illegally for profit.
The way they work is through tokenization and an algorithm. Tokenization is what you were describing with “ones and zeros”: tokenization shortens words so that, instead of having to create a file with the actual response you receive, every word is abbreviated to two letters or numbers, like A9 = “military”, with A9 being what the algorithm uses to operate faster simply because it is shorter than the word “military.” Humans speak in patterns, the algorithm is one large equation, and our language itself can be converted into a numerical pattern. So while the AI has no idea what it’s actually saying or doing, because it’s not actually intelligent in the way we like to pretend, like the Terminator, it simply completes a pattern that you, the user, input with your initial prompt.
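The comment's description of tokenization is a simplification: real LLM tokenizers (e.g. BPE) map subword pieces to integer IDs from a learned vocabulary, not two-character abbreviations. Still, the lookup idea it gestures at can be sketched with a toy, entirely hypothetical vocabulary:

```python
# Toy sketch of the word-to-short-code lookup described in the comment.
# The vocabulary below is hypothetical; real tokenizers use learned
# subword vocabularies mapping pieces to integer IDs.
vocab = {"the": "A1", "military": "A9", "pattern": "B4"}
reverse = {code: word for word, code in vocab.items()}

def encode(text):
    # Words missing from the toy vocabulary pass through unchanged.
    return [vocab.get(word, word) for word in text.split()]

def decode(tokens):
    return " ".join(reverse.get(token, token) for token in tokens)

encoded = encode("the military pattern")   # ["A1", "A9", "B4"]
assert decode(encoded) == "the military pattern"
```

The round trip works because encoding and decoding use the same table; an actual tokenizer differs mainly in operating on subword pieces and integers rather than whole words and letter codes.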
youtube 2025-11-17T15:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         outrage

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwLwoX9vgZ9_FyEZgh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzepmiqqb_pYA8qTUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLbcls5bEzTXfSwCt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwNe_yYDkU1swSswVN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugx4SwdH83qbCPCFfAR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8C5uLwoeA0Ru_Jdd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxgWeyB_75FGfum3Hd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-1lxuEQN6S_n2EeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLw_oUU11ZVK9hgAJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxdGXoZKrGzM-q_l114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
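The raw response is a JSON array of per-comment codes. A minimal sketch of loading it and indexing by comment id, using two records copied from the array above (the full array would be loaded the same way):

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugx4SwdH83qbCPCFfAR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzLw_oUU11ZVK9hgAJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

# Index the coded dimensions by comment id for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Tally one dimension across all coded comments.
emotion_counts = Counter(row["emotion"] for row in codes.values())
```

Indexing by id makes it easy to cross-check a single comment's coded dimensions against the table shown for it above.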