Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgzkEmmBd…` — "Wow on her phone while driving wow I'm a truck driver semi truck driver and ev…"
- `rdc_ghf7eoc` — "Clearview. But beyond that? Total mystery we don't get to know anything about ho…"
- `ytc_Ugzt2zQDu…` — "Well, I don't have all the degrees that these guys do but I'm an 81-year-old mal…"
- `ytc_Ugwt5r2i4…` — "the fact that there is an AI generated summary of this video and i got AI ads du…"
- `ytr_UgzxJU_rQ…` — "@gman9999-e9g you HAVE to practice it . theres plenty of free resources that can …"
- `ytc_Ugz3Pucox…` — "Good info thanks. When Uber first arrived it was advertised as \"Ride Share\". …"
- `ytc_UgzaML1OX…` — "We control AI using singularity machine to every one food,shelter,cloth and love…"
- `ytc_Ugxa_0EDB…` — "That chatbot was probably the only thing in this boys life that kept him going f…"
Comment
> People have the library of Alexandria at their fingertips
An LLM is not a database and contains no facts. It's a network of tokens and probabilities that can spit out things that often, but certainly not always, align with reality, but it's not any kind of verified reference like an encyclopedia or scholarly work.
It's like asking a friend about something. Maybe they're remembering right, maybe they misheard someone, maybe they're relaying a rumor they heard, and maybe they're misremembering or hallucinating. Unlike a human though, an LLM doesn't have any type of declarative (factual) memory. Every output involves rolling the dice, even for the exact same question.
Treating any LLM like an oracle of fact is just begging to be mislead.
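The "rolling the dice" the commenter describes is literal: at nonzero temperature, a model samples each next token from a probability distribution, so the exact same question can yield different answers. A minimal sketch with a toy vocabulary and invented probabilities (not from any real model):

```python
import random

# Toy next-token distribution for a factual question.
# Probabilities are invented for illustration -- not from a real model.
next_token_probs = {
    "Canberra": 0.6,    # correct answer
    "Sydney": 0.3,      # plausible-sounding error
    "Melbourne": 0.1,   # another plausible error
}

def sample_token(probs: dict[str, float], rng: random.Random) -> str:
    """Sample one token proportionally to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
# Ask the "exact same question" ten times: answers can differ each run,
# and roughly 4 in 10 samples here land on a wrong-but-plausible token.
answers = [sample_token(next_token_probs, rng) for _ in range(10)]
print(answers)
```

This is why the comment distinguishes an LLM from a reference work: nothing in the sampling step consults a store of verified facts.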
Source: reddit · Category: AI Harm Incident · Posted: 1772722205 (Unix timestamp) · Score: ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_o8qw4qh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_o8s8q8a","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_o8qt9ix","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_o8rkrkx","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_o8sd8b6","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}]