Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
OpenAI has a new paper analyzing hallucinations. It basically argues that the training methods we use encourage random guessing: in benchmarks, submitting an empty response scores the same as a wrong answer, yet a guess has some probability of being correct. It also argues that the hallucination rate for facts baked in during training will be no lower than twice the rate of facts stated only once in the training set. LLMs are the closest approach to AGI, since language is the medium of logical thinking and attention can be Turing complete (within the context-window limit; otherwise external memory is needed). The biggest issue now is that an LLM learns statically through training rather than continually during inference, since continual updates might cause instability. That is one of the future breakthroughs we could make, though online learning of that many parameters is very expensive. Maybe the future is smaller LLMs? (An LLM's "large" is relative to traditional skip-gram LMs, just as our desktop computers, like laptops, were formerly named microcomputers.)
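The scoring argument in this comment can be made concrete with a minimal sketch. It is not from the paper; the function name and numbers are illustrative. Under binary grading, where an empty answer and a wrong answer both score zero, any guess with nonzero confidence has a higher expected score than abstaining, so evaluation rewards guessing:

```python
# Illustrative sketch of the comment's scoring argument (not from the paper):
# with 1/0 grading, an empty answer scores 0, the same as a wrong answer,
# so a guess with any probability of being correct weakly dominates abstaining.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score under 1/0 grading.

    p_correct: probability that a submitted guess is right.
    abstain:   if True, submit an empty answer (always scores 0).
    """
    return 0.0 if abstain else p_correct

# Even a 10%-confident guess beats abstaining under this rubric.
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

A rubric that penalized wrong answers more than empty ones (e.g. -1 for wrong, 0 for empty) would flip this incentive for low-confidence guesses.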
YouTube · AI Responsibility · 2025-09-30T16:3…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
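The dimensions in this table match the keys of the raw response below. Here is a minimal sketch of the implied record type, assuming the label sets are exactly those observed on this page; the real codebook is not shown here and may contain more categories:

```python
# Minimal sketch of the coding schema implied by this page.
# The allowed values are only those observed in this section; the actual
# codebook is an assumption and may be larger.
from dataclasses import dataclass

RESPONSIBILITY = {"ai_itself", "developer", "company", "user", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "unclear"}
EMOTION = {"approval", "indifference", "mixed", "outrage",
           "resignation", "fear", "unclear"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION

# Example: the first record from the raw response below passes validation.
rec = CodedComment(id="ytc_Ugx8PKn5TabyFIDN2614AaABAg",
                   responsibility="ai_itself", reasoning="unclear",
                   policy="none", emotion="approval")
rec.validate()
```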
Raw LLM Response
[{"id":"ytc_Ugx8PKn5TabyFIDN2614AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzR_Sa6NP4qlvfSYUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw-3YucJB_koDnhS1t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgynaQx6Wb0UlK3tibR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwefNrtHiQHTzJHKJF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy4Q87kujDiluP6YNx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw1aO1noBQAo0CqChR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxJesK19az14OxiN1l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzZok8TKkrIbAnHNgx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxnk7yvhJz86RcCkIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"})