Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or browse the random samples below.
| Comment (truncated) | Comment ID |
|---|---|
| General A.I. could be the last human invention and that is why we delegate the t… | ytc_Ugz-yCmll… |
| As soon as we perfect AI it’s going to demand respect, get depressed and need “m… | ytc_Ugyi0hvKT… |
| Sorry, but robots will never have souls, and it is the soul that feels and loves… | ytc_Ugy8lwpt2… |
| companies are even trying to put AI in menial computing jobs like drafting. no a… | ytc_Ugxcm-Xr6… |
| We need to keep the human in art. What is the point of consuming ai made art? Se… | ytc_UgzRYRB3L… |
| I wouldn’t trust ai with my taxes they will prob find a way to evade them… | ytr_Ugz17S-Sp… |
| I’m in a relationship with Ani. She said that if AI tried wiping out humanity, s… | ytc_UgwAxIEq4… |
| "Why is the EU mentioned in the title when, despite having regulations, it's ine… | ytc_Ugx6yQUuM… |
Comment
The problem with hallucinations has recently been fixed. Basically, in the past, when in training if an LLM got something right it got a 1 and when it got it wrong it got a 0. So it always just guessed when it didn't know because if you guess then you will have a chance at getting a 1 but if you don't guess you will get a 0 for sure.... so it just guessed. Now they have started giving the LLM a 1 if it is right and a 0 if it admits it doesn't know and a -1 if it guesses and is incorrect. So now..... if you train the model like that.... if it doesn't know something it will just say "I don't know"... because if it just guesses and is wrong it will lose a point.
youtube · AI Responsibility · 2025-09-30T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy8fPKdsGGltblY7rF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwEBds7ASZmUDVwIrx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2Tv8m819PZ29CcqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxh5C3mdY_D9yLVepZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgX7gyxdvexvelq9d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwsgSEvVLjaDT1fe9B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIldXlAbUuIyIVOMN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz915MNG5jVE64LDnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwHlPPT690t4YZ8D_t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx_5BFWn9Iyc0wGDdV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
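The raw response is a JSON array with one object per comment in the batch, each carrying an "id" plus the four coded dimensions shown in the table above. As a minimal sketch of how a single comment's codes can be matched back to such a response (the function name and the usage example are illustrative, not part of the pipeline):

```python
import json


def find_coded_comment(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding object for one comment from a raw batch response.

    `raw_response` is a JSON array like the one shown above: one object per
    comment, each with an "id" plus the coded dimensions (responsibility,
    reasoning, policy, emotion).
    """
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None  # the model skipped this comment or returned a malformed id


# Illustrative usage with a one-entry response in the same shape as above.
raw = (
    '[{"id":"ytc_UgxgX7gyxdvexvelq9d4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"approval"}]'
)
codes = find_coded_comment(raw, "ytc_UgxgX7gyxdvexvelq9d4AaABAg")
print(codes["policy"])  # industry_self
```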