Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A video about headlines with an even more catchy headline is going to be short on truth and drowning in hype. For starters, the LLM / transformer architecture at its core is just a database of word relationships that were scanned from source text. We then started noticing that similar sentences produce similar concepts and ended up with feature formation through distributional semantics, which is techno-talk for "different sentences may have the same contextual content, and this produces neural net nodes that weigh more heavily based on their relationships." This is just piles and piles of data processing. We then stick a language module on the front to turn the numbers back into the original words, and then format the keywords in the prompt response into proper sentences.

All the claims you cited about AI developers not "understanding" AI are bunk. Who do you think built all this? Possessed programmers typing in devil symbols? What they didn't "understand" initially was that extra statistical complexity was popping up because related concepts were collapsing in the same way as related words, because the concept being communicated in English was the same. This is what it means when an AI dude says an LLM can "think in concepts": it means the training algorithm merged a bunch of numbers into another bunch of numbers because they both looked the same.

So, why is this hard to understand? Because the "word relationships" I referred to, known in the industry as "parameters," number between 175 and 405 billion. It's hard to "know," or have confidence that you can comprehend, a database with 400 billion numbered weight values. So there is no reason for you guys to go dull-brained enough to fall for the fantasy takes just because the lingo is hard to understand. Now you KNOW how it works: it's not magic, no demons, just a big pile of processed numbers and a language processor that turns a bunch of words into a sentence.
youtube AI Moral Status 2025-12-11T05:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwB0CW4-CSjJN0OLoV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxC7DazmZf1ubejeBt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTnvSzUwiF03lYR194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugztmegl2wsohvspf0p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwRAJ330_KcVguWyHJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzjtV2obGjkG627nr14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwIfW4eQHuW6-Uk_PZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzD9Vm2dSEZNB8EspR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyh1M2hJzR6RxDjBCN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLm7rrMG1_rDWJlf14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
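When inspecting raw model output like the JSON above, it helps to check each record against the codebook before accepting it. The sketch below is a minimal, hypothetical validator: the allowed value sets are assumptions inferred from the values visible in this export, not a documented schema, and `validate_codes` is an illustrative helper name, not part of any existing tool.

```python
import json

# Assumed codebook, inferred from values seen in this export (not authoritative).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose coded
    dimensions all fall inside the assumed allowed-value sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example with a hypothetical comment id:
sample = ('[{"id":"ytc_example","responsibility":"company",'
          '"reasoning":"consequentialist","policy":"industry_self",'
          '"emotion":"indifference"}]')
print(validate_codes(sample))  # the single well-formed record is kept
```

A record with an out-of-vocabulary value (say, `"responsibility": "aliens"`) would be silently dropped here; in practice you would more likely log it for manual review, since dropped records bias the coded sample.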