Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below and click one to inspect it.
- "What with all the jobs in Sports? like Surf Instructors? No AI can ebner do it. …" (ytc_UgyWyHuVw…)
- "I feel like the more pertinent question is: are we making systems that are in so…" (ytc_UgwCHzHKi…)
- "Seems like the they would have the opposite problem with their ultra low fertili…" (rdc_lj9tg3j)
- "I don't like being that guy, but that's not the focus of the article, and it's n…" (rdc_jifqn2r)
- "Ok, I went back and read the Nightshade paper (Shan et al), and the authors know…" (ytr_UgzIlMgjZ…)
- "So that's like a 10 maybe 15 minute from original to digital but ai is 5 seconds…" (ytc_UgyObHgLX…)
- "If we really care about disabled artists, we should address how polluting AI dat…" (ytc_Ugyryjf2n…)
- "Yeah? The problem isn't using the AI itself, it is HOW it is used. Especially si…" (ytr_Ugwl9rhCM…)
Comment

> I think "hallucination" is a hugely unhelpful term because it implies that something different is happening when an LLM produces false information. I don't think there's a fundamental difference in the process that produces a legal letter with real case law in it versus one with fake caselaw. It's not like in first case that it's looking through its databanks and finding relevant information it decides to include, and in the second case it goes "oh no I don't know what to put here, I guess I'll just make something up". In both cases it's just producing something that mathematically looks like the data its trained on. The first case probably is just more similar to things its seen before so the references it makes are more likely to match real references, but it doesn't know the difference, cus it doesn't _know_.

Source: youtube · AI Moral Status · 2025-10-31T22:0… · ♥ 440
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |

Coded at 2026-04-26T23:09:12.988011
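The four coding dimensions take values from a fixed codebook. The sketch below captures the value vocabularies observed in this sample as a small Python schema; the set contents, class name, and `validate` helper are assumptions for illustration, not the tool's actual code (the full codebook may define more values).

```python
from dataclasses import dataclass

# Values observed in this sample; the real codebook may allow more (assumption).
RESPONSIBILITY = {"none", "distributed", "ai_itself", "developer", "government"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"unclear", "none", "regulate", "ban"}
EMOTION = {"indifference", "fear", "outrage", "approval", "resignation"}

@dataclass
class CodedComment:
    """One coded comment as emitted by the model (hypothetical class name)."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed vocabulary.
        for field, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"{field}={value!r} not in {sorted(allowed)}")
```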
Raw LLM Response
[
{"id":"ytc_Ugw5sbGMK4VZYu0Qq6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwqjVRXqawJbMoy66Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy8Grygdpea24993Ll4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGgflIEMK7xL7NeBp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwdRVdZLcBeX6Ti2Q54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzkdOulE4Oh_I0KEU14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyaOKvxNgrjvVBS_lN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYt3dR3yqexSBezMt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwTZUF8Pt3egLV8L894AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyE1VLNZsEgA0AJiGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
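Because the raw response is a JSON array of per-comment objects, looking a coding up by comment ID amounts to parsing the array and indexing it. A minimal sketch, assuming the response text is stored verbatim; `load_codings` is a hypothetical helper, not part of the tool:

```python
import json

def load_codings(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coding objects)
    into a dict keyed by comment ID for O(1) lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Example: look up one coding from a (shortened) batch like the one above.
raw = (
    '[{"id":"ytc_UgyE1VLNZsEgA0AJiGR4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)
codings = load_codings(raw)
print(codings["ytc_UgyE1VLNZsEgA0AJiGR4AaABAg"]["emotion"])  # -> indifference
```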