Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think "hallucination" is a hugely unhelpful term because it implies that something different is happening when an LLM produces false information. I don't think there's a fundamental difference in the process that produces a legal letter with real case law in it versus one with fake case law. It's not like in the first case it's looking through its databanks and finding relevant information it decides to include, and in the second case it goes "oh no, I don't know what to put here, I guess I'll just make something up". In both cases it's just producing something that mathematically looks like the data it's trained on. The first case is probably just more similar to things it's seen before, so the references it makes are more likely to match real references, but it doesn't know the difference, because it doesn't _know_.
YouTube · AI Moral Status · 2025-10-31T22:0… · ♥ 440
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw5sbGMK4VZYu0Qq6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwqjVRXqawJbMoy66Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy8Grygdpea24993Ll4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyGgflIEMK7xL7NeBp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwdRVdZLcBeX6Ti2Q54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkdOulE4Oh_I0KEU14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaOKvxNgrjvVBS_lN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYt3dR3yqexSBezMt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwTZUF8Pt3egLV8L894AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyE1VLNZsEgA0AJiGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
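The raw response above is a JSON array that codes each comment on four dimensions (responsibility, reasoning, policy, emotion), keyed by comment id. A minimal Python sketch of how such a batch response could be parsed and looked up by id (the function name `index_codings` is illustrative, not part of any pipeline shown here; only the first two entries of the real response are embedded for brevity):

```python
import json

# A subset of the raw LLM response shown above (real entries, truncated for brevity).
raw_response = """[
  {"id":"ytc_Ugw5sbGMK4VZYu0Qq6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwqjVRXqawJbMoy66Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

def index_codings(payload: str) -> dict:
    """Parse the batch response and index each coding by its comment id."""
    return {entry["id"]: entry for entry in json.loads(payload)}

codings = index_codings(raw_response)

# Look up the coding for the comment displayed on this page.
coding = codings["ytc_Ugw5sbGMK4VZYu0Qq6x4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → none indifference
```

Indexing by id rather than by list position keeps the lookup stable even if the model returns the entries in a different order than the comments were sent.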