Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hallucinations isa result of the training, not the models themselves. Today they reward right answers, but don't punish wrong ones. This gives incentives to just guess if the model don't know. At the same time, hallucinations are at the core of LLM's creativity, so getting 100% ridd of it, might make them less usefull in a lot of ways
youtube AI Responsibility 2025-09-30T22:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyv9Xz4hdgYJlgrymJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgybPUpUsgTGhCTgeqh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxRhM41A8DaWi8KdR54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxrBEy8UNEaNmtgcrV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyVPd0v2fM_XejOz0F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwPa6tSLz05f7AfDI54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwnEIP2W57cmmNRXA54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugy40CVUdgivstnqOaV4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzuq-S3BwfVxnB_3tJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz-2lJQHEp9VXyyXwh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"}
]
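A raw response like the one above should be checked before the codes are stored. The following is a minimal sketch of such a validation step, assuming Python and using only the dimension values that actually appear in the records shown here (an assumption, not the tool's full codebook; the `validate` helper name is hypothetical).

```python
import json

# Allowed values per coding dimension, inferred from the records
# shown above -- an assumption, not an exhaustive codebook.
ALLOWED = {
    "responsibility": {"none", "user", "company", "developer"},
    "reasoning": {"unclear", "virtue", "consequentialist", "mixed"},
    "policy": {"unclear", "regulate", "none", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "fear",
                "approval", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id": "ytc_Ugy40CVUdgivstnqOaV4AaABAg", '
       '"responsibility": "developer", "reasoning": "mixed", '
       '"policy": "unclear", "emotion": "mixed"}]')
records = validate(raw)
print(records[0]["responsibility"])  # developer
```

Failing loudly on an unknown code catches the common failure mode where the model invents a label outside the scheme, rather than silently storing it.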