Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
A video about headlines with an even more catchy headline is going to be short on truth and drowning in hype.
For starters, the LLM / transformer architecture at its core is just a database of word relationships scanned from source text. We then started noticing that similar sentences produce similar concepts and ended up with feature formation through distributional semantics, which is techno talk for 'different sentences may have the same contextual content, and this produces neural net nodes that weigh more heavily based on their relationships'. This is just piles and piles of data processing. We then stick a language module on the front to turn the numbers back into the original words and then format the keywords in the prompt response into proper sentences.
All the claims you cited about AI developers not 'understanding' AI are bunk. Who do you think built all this? Possessed programmers typing in devil symbols? What they didn't 'understand' initially was that extra statistical complexity was popping up because related concepts were collapsing in the same way as related words, because the concept being communicated in English was the same. This is what it means when an AI dude says an LLM can 'think in concepts': it means the training algorithm merged one bunch of numbers into another bunch of numbers because they both looked the same.
So, why is this hard to understand? Because the 'word relationships' I referred to, known in the industry as 'parameters', number between 175 billion and 405 billion. It's hard to 'know', or have confidence that you can comprehend, a database with 400 billion numbered weight values.
So, there is no reason for you guys to go dull-brained enough to fall for the fantasy takes just because the lingo is hard to understand. Now you KNOW how it works: it's not magic, no demons, just a big pile of processed numbers and a language processor that turns a bunch of words into a sentence.
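The commenter's point about feature formation through distributional semantics can be illustrated with a toy sketch. The corpus, window size, and word choices below are all hypothetical, not from any real model; the idea is simply that words appearing in similar contexts end up with similar co-occurrence vectors.

```python
from collections import Counter
from math import sqrt

# Toy corpus: "cat" and "dog" appear in similar contexts; "car" does not.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "the cat chased the dog",
    "the car drove on the road",
]

def cooccurrence_vector(word, sentences, window=2):
    """Count which words appear within `window` positions of `word`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo = max(0, i - window)
                hi = min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cat, dog, car = (cooccurrence_vector(w, corpus) for w in ("cat", "dog", "car"))
print(cosine(cat, dog))  # high: "cat" and "dog" share contexts
print(cosine(cat, car))  # lower: "car" appears in a different context
```

Real models learn dense embeddings by gradient descent rather than raw counts, but the underlying signal is the same: shared context collapses related words toward the same region of the vector space.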
Source: youtube · AI Moral Status · 2025-12-11T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwB0CW4-CSjJN0OLoV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxC7DazmZf1ubejeBt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTnvSzUwiF03lYR194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugztmegl2wsohvspf0p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwRAJ330_KcVguWyHJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzjtV2obGjkG627nr14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwIfW4eQHuW6-Uk_PZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzD9Vm2dSEZNB8EspR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyh1M2hJzR6RxDjBCN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyLm7rrMG1_rDWJlf14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
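A minimal sketch of how a batch response like the one above could be parsed, indexed by comment ID (as the lookup UI does), and tallied per dimension. The rows are copied verbatim from the response shown; the variable names and file-free setup are illustrative, not part of the actual pipeline.

```python
import json
from collections import Counter

# Three rows copied from the raw LLM response above.
raw = '''
[
  {"id":"ytc_UgwB0CW4-CSjJN0OLoV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxTnvSzUwiF03lYR194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyLm7rrMG1_rDWJlf14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
'''

codes = json.loads(raw)

# Index by comment ID so a single record can be looked up.
by_id = {row["id"]: row for row in codes}
print(by_id["ytc_UgyLm7rrMG1_rDWJlf14AaABAg"]["emotion"])  # indifference

# Tally one dimension across the batch.
emotions = Counter(row["emotion"] for row in codes)
print(emotions.most_common())
```

Validating each row against the fixed set of allowed values per dimension (e.g. `emotion` in `{fear, outrage, mixed, indifference, ...}`) before tallying would catch malformed model output early.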