Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The problem with this hypothesis is that currently almost all of the AI spending…" (ytc_UgzIFFI_C…)
- "Yeah but if you limit a AI so much it is bound to make idiotic mistakes 😂…" (ytc_Ugw_p6SIs…)
- "very nice channel, but not cool the fake AI part , a good reminder that we can't…" (ytc_UgzEQmLWO…)
- "I use ChatGPT for solo roleplaying. I designed a simple ruleset I fed it and sta…" (rdc_mrt4j2i)
- "I'm waiting for the day for AI to take control over humanity and over nuke.... …" (ytc_UgwVG267b…)
- "Sorry, but I have to point out the moment when Hinton is describing the good sce…" (ytc_UgydLCp8y…)
- "One thing is to use A.I as a tool and the other to use it as the whole factory.…" (ytc_UgzBgisYl…)
- "We just learned that AI is going to replace teacher and lawyer jobs pretty soon …" (ytc_Ugx8HBC50…)
Comment
OpenAI has a new paper analyzing hallucinations. It basically argues that current training methods encourage random guessing: in benchmarks, submitting an empty response scores the same as a wrong answer, while a guess has some probability of being correct. It also argues that the hallucination rate for facts baked in during training will not be lower than twice the rate of facts stated only once in the training set.
LLMs are the closest approach to AGI, since language is the medium of logical thinking and attention can be Turing-complete (within the context-window limitation; otherwise external memory is needed). The biggest issue now is that LLMs learn statically through training rather than continuously during inference, since continual updates might cause instability. That is one of the future breakthroughs we could make, though online learning of that many parameters is very expensive. The future might be smaller LLMs? (An LLM's "large" is relative to traditional skip-gram LMs, just as our desktop computers were formerly called "microcomputers", as were laptops.)
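The benchmark-scoring argument in the comment can be sketched numerically. This is a minimal illustration of the incentive, not the paper's method; the function name and probabilities are illustrative assumptions:

```python
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score for one question under binary grading.

    p_correct: the model's chance that its best guess is right.
    abstain: if True, submit an empty answer, which is scored 0,
             the same as a wrong answer.
    """
    return 0.0 if abstain else p_correct

# Even a 10% shot at being right beats abstaining, so training against
# such benchmarks rewards confident guessing over saying "I don't know".
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

Under this grading scheme, guessing weakly dominates abstaining for any `p_correct > 0`, which is the comment's point about why the training setup encourages hallucination.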
youtube
AI Responsibility
2025-09-30T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugx8PKn5TabyFIDN2614AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzR_Sa6NP4qlvfSYUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-3YucJB_koDnhS1t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgynaQx6Wb0UlK3tibR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwefNrtHiQHTzJHKJF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy4Q87kujDiluP6YNx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw1aO1noBQAo0CqChR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJesK19az14OxiN1l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZok8TKkrIbAnHNgx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxnk7yvhJz86RcCkIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
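A response like the one above is only usable if it parses as JSON and stays within the coding schema. Here is a hypothetical validator sketch; the allowed values are inferred from the codes that appear in this dump and may be incomplete:

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# This is an assumption about the codebook, not its definition.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "outrage",
                "resignation", "fear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject out-of-schema values."""
    rows = json.loads(raw)
    for row in rows:
        for field, allowed in ALLOWED.items():
            if row.get(field) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: bad {field}={row.get(field)!r}")
    return rows

sample = ('[{"id":"ytc_example","responsibility":"user",'
          '"reasoning":"unclear","policy":"none","emotion":"fear"}]')
print(len(validate_codes(sample)))  # 1
```

Catching malformed output here (such as a stray `)` where `]` belongs) lets the pipeline re-prompt the model instead of silently dropping a batch of codes.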