Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Isn’t this really about a binary AI wishing to separate divide and conquer us? A…
ytc_UgxuD1ov0…
it was 2019 autopilot (6 years ago!), worthless 60 minutes bs. What was the guy …
ytc_UgwlpRQ6L…
Oh look, MAGA Trumpscab celebrates you losing your job - and wants you to think …
ytr_UgxE0OiY3…
If you really listen to Hinton, you start to understand why he didn't see the is…
ytc_UgwZYrzaW…
Chatgpt told me I was a strawberry. I asked it to use an analogy on how to deal…
ytc_Ugw4nkvZE…
"they're trying to steal what i have rightfully stolen!"
we desperately need to…
ytc_UgxeM6_Rz…
I literally saw the Dalai Llama suck a kid's tongue. It wasn't a kiss, it was a …
ytc_UgzXQNn40…
The benefit of it in the medical field is the quick response time. When doctors …
ytc_UgzHVd_lm…
Comment
This is outdated and inaccurate in some important ways. Hallucinations are not caused by AI not understanding what it is saying. An LLM is a token prediction mechanism; it has no capacity to "understand" anything. Hallucinations are caused by variances in the batch size (the amount of data processed at one time) the next token is predicted with, and by temperature settings (the probability range the next generated token is drawn from). The major issue being highlighted here is an LLM's number one weakness: non-determinism. This means it is impossible to debug any one bug and then implement that fix for similar bugs in the conventional manner. By using a fixed batch size and a temperature of 0 you can create a purely deterministic LLM, as shown by Thinking Machines in their blog post on September 10th of this year. This will resolve the vast majority of the issues cited here, because they all boil down to one core issue: previously you could not effectively debug an LLM, and now you can.
youtube
AI Responsibility
2025-10-03T16:3…
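The comment above hinges on the claim that a temperature of 0 makes token selection deterministic. A minimal sketch of why, using hypothetical toy logits rather than a real model: at temperature 0, sampling collapses to argmax, so repeated runs pick the same token regardless of the random seed.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a next-token index from raw logits.

    temperature == 0 means greedy decoding: always take the argmax,
    so the choice is deterministic across runs.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling, then sample from the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]  # hypothetical 3-token vocabulary

# Greedy decoding: 100 different seeds, always the same token.
greedy = {sample_token(logits, 0, random.Random(i)) for i in range(100)}

# Temperature > 0: different seeds can yield different tokens.
sampled = {sample_token(logits, 1.0, random.Random(i)) for i in range(100)}
print(greedy, sampled)
```

Note that this only covers the temperature half of the comment's argument; the fixed-batch-size point concerns floating-point non-associativity in batched inference, which a toy sketch like this does not reproduce.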
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgynfEijUvzZe0ZqF3V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw7CQLpJ1FPVqf_d_l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9jSxtu37K-mdjEZd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzI76bty-Vihfexy8N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwSFCw_0ZNBCr5KYqJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxCnkHlQ0JnxyYWBgF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugx8gCQANMHqGcsVoi94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfF1_xlEH-8xrHFjZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx74C8wtpt97sedo8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxsOlEqqzcYytpaEDV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]