Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "I left being a technology teacher because kids only wanted to experience AI. I w…" (ytc_Ugxg2KE00…)
- "The mod clearly doesn't understand that ai can do any art style and is not just …" (ytc_UgzOa0AZY…)
- "He overhypes LLMs, and an LLM expert is not THE godfather of AI. The connectioni…" (ytc_UgzLIboZ2…)
- "We have to win the ai war. Unfortunately, we do not have the power grid or nucle…" (ytc_Ugw2qBLAO…)
- "Is creativity the idea not the execution though? True there are astonishingly ba…" (ytr_UgyWZ26h7…)
- "@Sreevanii Well, the video didn't really make a good case in saying that the no…" (ytr_Ugxx4jmg0…)
- "Ai is mans arrogance thinking he can put a gun in his mouth pull the trigger and…" (ytc_UgzifAUaH…)
- "They should use ai and robots to make new life out the space like mars cuz they …" (ytc_UgxzvlQnw…)
Comment
>LLM hallucinations are the events in which ML models, particularly large language models (LLMs) like GPT-3 or GPT-4, produce outputs that are coherent and grammatically correct but factually incorrect or nonsensical. “Hallucinations” in this context means the generation of false or misleading information. These hallucinations can occur due to various factors, such as limitations in training data, biases in the model, or the inherent complexity of language.
https://www.iguazio.com/glossary/llm-hallucination/
Source: reddit · AI Jobs · Posted: 1730723250.0 (2024-11-04 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_lvitt9i","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_lvyyrmq","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_lv8olp3","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_lv9a12j","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_lvc52u4","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
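The raw response is a JSON array of per-comment codings keyed by comment ID, with one value per dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be validated and looked up by ID — Python is an assumption here, and `parse_codings` is a hypothetical helper, not part of the dashboard's actual code:

```python
import json

# Raw model output: a JSON array of per-comment codings,
# shaped like the "Raw LLM Response" shown above.
RAW_RESPONSE = """
[
  {"id":"rdc_lvitt9i","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_lvc52u4","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
"""

# Every well-formed coding entry must carry these keys.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, skipping malformed entries."""
    codings = {}
    for entry in json.loads(raw):
        if isinstance(entry, dict) and EXPECTED_KEYS <= entry.keys():
            codings[entry["id"]] = entry
    return codings

codings = parse_codings(RAW_RESPONSE)
# Look up one comment's coding by its ID.
print(codings["rdc_lvc52u4"]["emotion"])  # -> mixed
```

Skipping malformed entries rather than raising keeps a single bad row in the model output from discarding the whole batch, which matters when some dimensions come back as "unclear" or the model drifts from the schema.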