Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below (a sketch of the ID lookup follows the list):

- "i think if we made google to be “safe” we can do it with ai just longer. but hum…" (ytc_UgyIiNTGf…)
- "AI for now can create something new, real good or can create one end to end proj…" (ytc_UgzPakUu1…)
- "The most frightening question about AI is...'If it is so terrible, why don't we …" (ytc_UgzFJV-9f…)
- "@RatherShabby Like with any industry affected by automation, I think that sensi…" (ytr_Ugwhj4exX…)
- "Yes WE ELDERS , DO KNOW WHAT IS COMING. SOME OF US , WROTE BOOKS ABOUT SCI FI …" (ytc_UgyuWKycp…)
- "Having them actually learn what it takes to set up a business, seems weird that …" (ytc_UgxBIiTI_…)
- "Did they just let the software make the arrest for them, or what? That's not how…" (rdc_jv6vfnf)
- "Funny thing, couldnt you even argue the other way around? Why should an robot le…" (ytc_UgwagoUoJ…)
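A minimal sketch of what the ID lookup could look like, assuming the coded records are stored as a JSON array in the same shape as the "Raw LLM Response" block at the bottom of this page (the file name `raw_llm_responses.json` is an assumption, not the tool's actual storage):

```python
import json

# Hypothetical storage: a JSON array of coded records, one object per comment,
# matching the shape of the "Raw LLM Response" block shown below.
with open("raw_llm_responses.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if it was never coded."""
    return by_id.get(comment_id)

print(lookup("ytc_UgwaW0zpxwYp_RN1up54AaABAg"))
# {'id': 'ytc_UgwaW0zpxwYp_RN1up54AaABAg', 'responsibility': 'developer', ...}
```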
Comment
I think far too many of the decisions on AI are being made as if we have an actual understanding of how these systems work, and as if we have a much stronger influence on how they behave _in human terms_ than we actually do. Hallucination is an example of this. I think the term misleads people into thinking of it as a discrete event, when the reality is that what we're training AI to do is choose tokens as if it understands things, rewarding it based on how much it _appears_ to understand; and to us, the clearest demonstration of reasoning is coming to an accurate conclusion when you don't have all the info in your training.
I think we're essentially saying, "Make something up that sounds convincing." The better known something is, and the more data there is on the subject in the training, the easier it is to make the answer convincing using facts, and the more likely it is to give a "good" answer.
The less info it has on the subject, the more likely it is to be incorrect. _But_ the more believable the lie, the more likely we are to reward it as a "good" answer.
Additionally, even our perception of what makes an AI "better" hinges on it coming to the right conclusions with relatively limited information. It's not about looking up exact facts on Wikipedia; search engines have been able to do that for years. What amazes us is when you ask something with limited information and still get the right answer when you wouldn't expect it. Unfortunately, we're most amazed by the biggest guesses, the "hallucinations" that happen to be true, so in our own brains we give it the most trust points when it's being the most reckless, and that will likely lead us to believe it when it gives other answers that are similarly "hallucinated" and we don't have the knowledge to confirm their accuracy.
I've seen this happen in real time while playing board games with some friends. Someone asked ChatGPT about some technicalities of the rules in Monopoly, which they thought were fairly obscure, and it answered "correctly", so in very human terms they trusted it as "reliable". Later they asked about a rule we were disagreeing about in another game, and it gave a convincing answer (though not one we could confirm from the rulebook or any official errata from the game designer), so everyone went along with it, probably seeing it as even more trustworthy, despite the fact that we couldn't confirm its answer. Even later, playing a different game, two players disagreed on their interpretations of another rule, so they asked ChatGPT and were ready to accept its judgment and continue playing, except that I had _just_ seen that exact rule pointed out specifically in the manual when I was thumbing through it a minute earlier, and ChatGPT was simply giving a "believable" answer, even though it was totally incorrect.
I think there's a broken incentive structure in the training (and in the meta-training that comes from how we respond to it), where reckless answers that happen to be correct are valued more than obviously correct ones, so we end up trusting it most on exactly the things that are impossible to confirm and that it's least likely to be factually correct about.
Source: youtube · AI Moral Status · 2025-10-31T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
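The table above is a re-rendering of one record from the raw response below (the entry for ytc_UgwaW0zpxwYp_RN1up54AaABAg carries the same four codes). A hedged sketch of how such a row could be produced; the rendering code is an assumption, not taken from the actual tool:

```python
def render_coding_table(rec: dict, coded_at: str) -> str:
    """Format one coded record as the markdown Dimension/Value table above."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),  # timestamp comes from the coding run, not the model output
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines.extend(f"| {dim} | {val} |" for dim, val in rows)
    return "\n".join(lines)
```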
Raw LLM Response
[
{"id":"ytc_UgxdXf7QoFmDGGOyNfN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxSjIu2Vl2S4XsDv854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxxZukTmMl-JceLYTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9XpETftOZ7TaCXXt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwaW0zpxwYp_RN1up54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyNHO1SiatOYKKW7IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyTolRgYrK8D5WL3bN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyhnt8LvpTm4dkAqqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzpvr7yPMYvQ1Pjdyd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}
]
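Since the model returns free-form JSON, each batch is worth validating before it is accepted. A sketch, with the allowed codes inferred only from the values visible on this page; the project's real codebook may define more (or different) categories:

```python
import json

# Allowed codes per dimension, inferred from values observed on this page;
# the actual codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems in one raw LLM response; an empty list means OK."""
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    for i, entry in enumerate(entries):
        if "id" not in entry:
            problems.append(f"entry {i}: missing id")
        for dim, allowed in SCHEMA.items():
            if entry.get(dim) not in allowed:
                problems.append(f"entry {i}: {dim}={entry.get(dim)!r} not in codebook")
    return problems
```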