Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The current LLM architecture will never "fix" hallucinations. In fact, hallucinations are mathematically inevitable with the algorithm. Prominent AI researchers such as Yann LeCun have been warning about the limitations of LLMs for the past few years.
Source: youtube · AI Responsibility · 2025-10-01T16:1… · ♥ 669
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | none                       |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | resignation                |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id":"ytc_Ugy728lai_s-_haZGkl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzjsq0XF3yQkwm7IlZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQAGs2OCdHV3HSrVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxGAGFS2Cj2Vb_Q9gN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwr180V77KcBGPn-Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxEgO7f-FGKELDydjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzWgK5BP0Dp68KA4i94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyMJCBmsGsyzq1BxXF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXGX4HOjUPkV_Lb994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-oMPXx-yV9HqoMrt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
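Because the model returns one JSON array covering a whole batch of comments, recovering the coding for a single comment means parsing the array and looking up its id. Below is a minimal sketch of that step; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the helper name `codings_by_id` and the two-record sample are illustrative, not part of the app.

```python
import json

# A shortened sample in the same shape as the raw LLM response above
# (a JSON array with one coding record per comment).
raw_response = """[
  {"id": "ytc_Ugwr180V77KcBGPn-Yh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzWgK5BP0Dp68KA4i94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

def codings_by_id(raw: str) -> dict:
    """Parse the raw batch response and index each record by comment id."""
    return {record["id"]: record for record in json.loads(raw)}

codings = codings_by_id(raw_response)
# The comment shown above maps to the record with emotion "resignation".
print(codings["ytc_Ugwr180V77KcBGPn-Yh4AaABAg"]["emotion"])  # resignation
```

In practice the parse step should also be wrapped in error handling, since a model may emit malformed JSON or omit an id that was requested.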