Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- No measurable P&L is a big, big thing. It's evidence the AI systems are at least… — `ytc_UgzSOs5Ln…`
- not the Ai bros calling us mindless zombies when they’re the ones who cant gener… — `ytc_UgwbTWPyR…`
- "our AI launch is going poorly, how do we get more people to use it?" "Let's sho… — `ytc_UgzfR3LWc…`
- Any hope that investing money in making these algorithms safer is going to actua… — `ytc_UgzKh-fRB…`
- I saw no warning. Literally just poorly spoken marketing jargon. That was stupid… — `ytc_Ugxdp767t…`
- Are u sure? AI is about to take away most people’s job. Not because AI is bad bu… — `ytr_Ugz-D5Tlv…`
- 😂 the morebyou use Ai the nore you are training Ai te replace your job and Yours… — `ytc_UgzwEUdkI…`
- When I use Chatgpt, I was so easy to get mad because of the chatgpt response loo… — `ytc_UgwZnaVxM…`
Comment
@ProfessorDaveExplains I think you misunderstood that Lex Fridman's guest's statement. A deep neural network consists of hundreds of hidden layers (or dozens of transformer blocks) that are trained (weights are learned iteratively through gradient descent/other techniques) to fit the model to the training data. However, it is not impossible to know or visualize what these inner layers do. While we cannot point to individual/groups of neurons and say that "this neuron detects the language or this neuron sees the color red", we can sort of point to certain structures and say "oh this is responsible for some amount of reasoning" or "this is responsible for something else". There is an entire field of study called interpretability research that attempts to uncover some structure in how networks represent concepts. It is the equivalent of a neuroscientist trying to figure out what each neuron does in the human brain; we know that that is not possible but we can make educated guesses based on neuron activity as to what regions of the brain are activated when we do certain tasks. It's possible it was just him being sensationalist and mysterious but eh, who knows
youtube · AI Governance · 2025-08-26T15:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgwIuaegTC1BDpWwzHx4AaABAg.AMI9vNM_R0VAMIJYTa23q6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwSOCC5mD-U96m1j0h4AaABAg.AMI9u9Q7ylLAMIt2DDoOCU","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwSOCC5mD-U96m1j0h4AaABAg.AMI9u9Q7ylLAMJ8jsyg7kM","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_UgwSOCC5mD-U96m1j0h4AaABAg.AMI9u9Q7ylLAMJOErnfCyG","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwZ_LqxaX3RKo8ll894AaABAg.AMI9ZQUOmgQAOJHpQcTT48","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyU3Lhv2obRVScJ5TR4AaABAg.AMI9TUbj9IlAMICXaadURN","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugz5r-GSLnvRrJZoX6B4AaABAg.AMI8rwZXyDhAMIAs8LrBa2","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz5r-GSLnvRrJZoX6B4AaABAg.AMI8rwZXyDhAMICzdhEKWi","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyEy-Vj7gjO6UcZmjR4AaABAg.AMI8iMEyf9FAMI9iZqXo5b","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyEy-Vj7gjO6UcZmjR4AaABAg.AMI8iMEyf9FAMIMdjc3y5N","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
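A raw response like the one above can be checked before it is stored. The sketch below is a minimal, hypothetical validator: the four dimension names come from the JSON itself, but the sets of allowed values are only those visible in this sample — the real codebook almost certainly includes categories not shown here.

```python
import json

# Allowed values inferred from the visible sample output only;
# the actual codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "fear", "approval", "resignation"},
}

def validate(records):
    """Return (ok, errors) for a list of coded-comment dicts.

    errors is a list of (comment_id, dimension, bad_value) tuples.
    """
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return (not errors), errors

# Hypothetical example record, not one of the IDs shown above.
raw = '[{"id": "ytr_example", "responsibility": "company", ' \
      '"reasoning": "virtue", "policy": "none", "emotion": "outrage"}]'
ok, errs = validate(json.loads(raw))
print(ok, errs)  # → True []
```

Running the same check over every batch makes malformed or off-codebook LLM output fail loudly at ingest time rather than surfacing later as an unexplained gap in the dashboard.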