Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect it (a minimal lookup sketch follows the sample list).
- "My own experience with Chat GPT reveals what might be in store for humanity. Aft…" (ytc_UgxqN3eWI…)
- "@imnugget8085 I am not pretending I know the future.im just saying that with the…" (ytr_UgzqC8lev…)
- "I don’t think even AI could make sound arguments for the existence of God. You’r…" (ytc_UgzQbCIlp…)
- "If a level 2 system is not used properly, it is the driver at fault. Of course a…" (ytr_Ugx_tLvU6…)
- "Wikipedia is the best alternative to the ai overview. Sure anything can be on th…" (ytr_UgxgcfHsi…)
- "@guitarskooter I understand it perfectly, just doubt the motives and sudden push…" (ytr_UgyvEagXl…)
- "This is what I'm talking about I mean seriously people need to stop using AI wel…" (ytc_UgxzbfwRq…)
- "Isn't it normal for traces of antibodies to only last a few months? The real qu…" (rdc_g9tb1mc)
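For the lookup itself, the sketch below shows one way it could work, assuming the raw responses are stored as JSON files that each contain a list of records keyed by "id". The directory layout and the function name `find_coded_record` are illustrative assumptions, not the tool's actual API.

```python
# Minimal lookup sketch. Assumes raw LLM responses are saved as JSON files,
# each holding a list of records with an "id" field (as in the example
# further down). Directory name and function name are illustrative only.
import json
from pathlib import Path

def find_coded_record(comment_id: str, responses_dir: str = "llm_responses") -> dict | None:
    """Return the coded record for comment_id, or None if it is not found."""
    for path in sorted(Path(responses_dir).glob("*.json")):
        for record in json.loads(path.read_text(encoding="utf-8")):
            if record.get("id") == comment_id:
                return record
    return None

# Example: find_coded_record("rdc_g9tb1mc") would return that comment's
# record, if it is present in any stored response file.
```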
Comment
Subject matter expert in AI risk here!
It isn't "flawed" exactly, but it isn't what we want. An AI system can be very highly capable and not care about the things that we care about. There's a big difference between an AI system being capable of pursuing our goals, and actually having our goals in the first place.
You're right that there is often a lot of projection and anthropomorphism, which is hard to get away from and in some ways can be necessary in order to communicate with a lay audience. But most of the object-level research on AI and its risks does not rely on anthropomorphism. (I recommend the AI Safety Info wiki if you want to learn more about the science behind what Dave presented in the video!)
One key thing to grasp here is that we do not directly design AIs. We design their learning algorithms; those are made of code, and can be compared to designing the principle of evolution. But we don't get to decide exactly what kind of creature pops out the other end. Modern AI systems are not made of code and are not programmed. They are giant neural networks, where the billions or trillions of connections between digital neurons are represented by giant matrices of floating point numbers.
A lot of people naturally find this next bit challenging: what does it mean to know or understand? If you ask an AI a question and it answers correctly, doesn't it know the answer? And if you give it a test of hard, complex math problems it has never seen before and it can do them, doesn't that demonstrate that it understands something about math? Empirically, it sure seems like AI systems know and understand quite a lot.
(Consciousness is a red herring. You understand a lot of things that you are not conscious of, and you are conscious of a lot of things that you do not understand.)
I define "understanding" as a compressed map that reproduces the territory. Even if a system is only trained to spot patterns, if it can spot deeper and deeper patterns, it is gaining more and more understanding. That is exactly what happens in the pre-training phase when training LLMs. First, they try to memorize their training data. Then when that isn't good enough, they pick up on grammatical patterns. Then when that still isn't good enough, they discover the actual meanings of words, storing those meanings as abstracted high-dimensional vectors in the model weights. And not just the meanings of individual words, but of complex concepts. Eventually that progression results in fairly deep understanding, at the level of a mediocre expert in every field at once.
youtube · AI Governance · 2025-08-28T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
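For reference, a coding result can be modeled as a small record whose fields mirror the table above and the raw response below. This is a minimal sketch, assuming Python dataclasses and plain strings for each dimension; the class name is an assumption, and the example values in the comments are simply those that appear in the raw response.

```python
# Illustrative schema sketch for one coding result; not the tool's own types.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # e.g. "none", "developer", "company", "ai_itself", "distributed"
    reasoning: str       # e.g. "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "indifference", "fear", "outrage", "approval"
    coded_at: datetime   # time the coding was stored
```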
Raw LLM Response
```json
[
{"id":"ytr_UgzovDY7oF-khB_V0fh4AaABAg.AMN8EQyFQUMAMO5Mr2eo4q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwddSULtVvyUV-d81p4AaABAg.AMN61Jw5f1lAMO0xv7QCnB","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugxf4oPmhJ2BrT_4Z614AaABAg.AMMdu8SidxgAMNaf9HLL1W","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugxf4oPmhJ2BrT_4Z614AaABAg.AMMdu8SidxgAMNzT7MxbEU","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugxf4oPmhJ2BrT_4Z614AaABAg.AMMdu8SidxgAMOVf7wIyCA","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwwfoAtxRfbk5G3V_p4AaABAg.AMMdIcsvHrfAMNw2ODbnQX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyuWqPZ_yPzDwekjup4AaABAg.AMMcOQbeXc-AMNZs1QXdu_","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyuWqPZ_yPzDwekjup4AaABAg.AMMcOQbeXc-AMNpfc4tvbx","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMN7Fjdux-8","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMNdTIRMiQs","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
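The Coding Result table above is just the matching record from this array, minus the id field (the "Coded at" row presumably records when the coding was stored). Below is a minimal sketch of that mapping, assuming the response is available as a JSON string; the function name is illustrative.

```python
# Illustrative only: rebuild the Dimension/Value rows for one comment
# from a raw LLM response array like the one shown above.
import json

def coding_rows(raw_response: str, comment_id: str) -> list[tuple[str, str]]:
    """Return (Dimension, Value) pairs for the record matching comment_id."""
    records = json.loads(raw_response)
    record = next(r for r in records if r["id"] == comment_id)
    return [(dim.capitalize(), record[dim])
            for dim in ("responsibility", "reasoning", "policy", "emotion")]

# Example, using the first record above:
# coding_rows(raw, "ytr_UgzovDY7oF-khB_V0fh4AaABAg.AMN8EQyFQUMAMO5Mr2eo4q")
# -> [("Responsibility", "none"), ("Reasoning", "consequentialist"),
#     ("Policy", "none"), ("Emotion", "indifference")]
```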