Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "😂yo if you beat a human what makes you think your gonna beat a robot…" (ytc_Ugy4pxVT1…)
- "It's still a piece of code and running on the servers. The humans have all the p…" (ytc_UgzVyPy-D…)
- "NO choice? People dont understand that the higher dimension entities can use …" (ytc_UgwUw-7CR…)
- "Trolly on an eternal loop sounds like earth orbiting the sun... Sounds like AI a…" (ytc_UgzSngscP…)
- "at least an ai robot delivers the food and wont take a pic of you naked in your …" (ytc_UgzHse0iR…)
- "@MyNamesH75 I get your point, but we're not talking about 'you' or 'me,' we'r…" (ytr_UgyIiSbyc…)
- "Education is about learning something new not school. The arrival of AI has poin…" (ytc_UgyWfhqt7…)
- "Guys, I just printed "Hello world!" in Python and my screen said "Hello World!" …" (ytc_Ugxh8wyVJ…)
Comment
The point about why LLMs hallucinate and don't say "I don't know" at 16:50 is a bit off; it's right, but in a misleading way.
The bulk of LLM training is rooted in predicting what word will appear next. If more people wrote "I don't know" online, the LLM would learn to say "I don't know" more often. The misleading part is that it wouldn't say the phrase because it actually doesn't know; it would say the phrase because it is mimicking what it has seen. The real problem is that LLMs learn mainly from next-token prediction, not by interacting with the real world. When a human says "I don't know", it is because they have learned from interactions with other humans that saying "I don't know" when they actually don't know is usually the better thing to do. As LLMs don't learn in this way, we shouldn't expect them to be able to say when they don't know something.
youtube
AI Moral Status
2025-10-30T22:0…
♥ 34
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwiNphKFW9X1-QaJ-14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRjAa1xY9Z5cAgqhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzHJqxEZwW92ojEIM54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwKEyRf9Efg1gtDGVN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzsz86Dgtuqvi6ELtx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwqp2A-ZgRV4MaerRt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwJPHWUcnvJotZFqnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw2h4n1cyMj8mxDYGN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyAJ2kAfyBWrCvGR6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgymcRj0Dpo-ThynfKx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
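The coding table above is derived from one entry of a raw response like this. As a minimal sketch of how that mapping might work, assuming the model output is a JSON array of per-comment objects keyed by "id" (the function name and shortened IDs below are illustrative, not part of the actual tool):

```python
import json

# Example raw LLM response: a JSON array of per-comment codings.
# IDs are shortened placeholders; real IDs are full YouTube comment IDs.
raw_response = '''
[
  {"id": "ytc_UgymcRj0", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzRjAa1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

def lookup_coding(raw: str, comment_id: str):
    """Parse the model output and return the coding row for one comment ID."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model returned malformed JSON
    by_id = {row["id"]: row for row in rows}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgymcRj0")
print(coding["responsibility"])  # developer
```

Guarding the `json.loads` call matters here: an LLM can emit malformed JSON, and a lookup tool should surface that as a missing result rather than crash.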