Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- Mmkay i didnt hear any screaming AI in the video, but it was a good video. Pleas… (ytc_Ugwre70aI…)
- The world has become so heavy with one sided opinions leaving people disliking e… (ytr_Ugw8zc_AP…)
- So the guy makes his money creating AI, knowing what he was doing, then makes mo… (ytc_Ugz4T3leK…)
- Pick your poison: A Waymo Car or A Deranged Maniac as your Uber driver that had … (ytc_UgxjHKSQo…)
- UBI is not the answer. Universal OWNERSHIP of AI will be the only answer. T… (ytc_UgzbatUoa…)
- Very well put Sal!!! I have worked with AI research before but stopped. But now … (ytc_UgxLEGw-a…)
- AI takes all of our portable water, you can’t help people if you destroy the nat… (ytc_UgzoSrYNj…)
- I'm using words like "self-awareness" without actually understanding what they m… (ytc_UgyfRN3gL…)
Comment
On hallucination, retrieval-augmented generation (RAG) pipelines are used which add processing steps before generation to incorporate relevant context/information into the LLM's response. This is most effective in narrow, domain-specific chatbots; its pretty easy to set up a RAG pipeline to search through a given companies' terms of use and similar documents (if pre-processed, much harder if not) to create a customer service chatbot. It's more challenging to use this in generalized models, as the requisite knowledge base is far larger, but RAG is why LLMs frequently attach references to their claims now. There are risks of incorporating erroneous information into the underlying knowledge base and retrieving irrelevant information, but RAG remains an effective method of managing hallucination.
Platform: youtube · Video: AI Moral Status · 2025-11-16T02:0…
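The comment above sketches how retrieval-augmented generation works in practice. As a purely illustrative aside (not part of this coding pipeline), a minimal RAG step along those lines might look like the sketch below; the document snippets, the word-overlap retrieval, and the prompt template are all assumptions standing in for a real embedding-based retriever.

```python
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query (a stand-in for
    embedding similarity in a real pipeline) and return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_passages: list[str]) -> str:
    """Prepend the retrieved passages as numbered context so the model can
    ground its answer and attach references."""
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(context_passages, 1))
    return ("Answer using only the context below and cite sources by number.\n"
            f"{context}\n\nQuestion: {query}")

# Illustrative document set: pre-processed snippets from a company's terms of use.
terms_of_use = [
    "Refunds are issued within 14 days of the original purchase.",
    "Accounts may be suspended for repeated abusive behaviour.",
    "Personal data is retained for 90 days after account deletion.",
]

query = "How long do refunds take?"
prompt = build_prompt(query, retrieve(query, terms_of_use))
print(prompt)  # this grounded prompt, not the bare question, is sent to the LLM
```

A production setup would replace the word-overlap scoring with vector search over an embedded knowledge base, which is where the risks the comment mentions (erroneous source material, irrelevant retrievals) come in.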
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxBYclTZsCOKjvuPMt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwdZZ-9k8WO7zZFk7J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyxHbnWtfzT3TDCbt94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwNfQ8k0NyXzuuIlcF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxkUKT3mae-6MZR7ld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgylOyQKmdf4sW81H-Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFZJz845gFM8fe5-B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgziZ9q5yLVJqu2ds714AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyyiATN_IfHVFxsn514AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzNrBxdB_swATDSVZl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
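For readers working with these dumps directly, the following is a minimal sketch of turning one raw batch response like the one above into the per-comment Coding Result table; the function name, the return type, and the fallback to "unclear" for missing fields are illustrative assumptions, not the actual pipeline code.

```python
import json

# Dimensions coded for each comment, mirroring the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its coded dimensions. Any missing dimension
    falls back to "unclear"."""
    coded = {}
    for record in json.loads(raw):
        coded[record["id"]] = {d: record.get(d, "unclear") for d in DIMENSIONS}
    return coded

# Usage with a two-record excerpt of the batch shown above.
raw_response = """[
  {"id":"ytc_UgxBYclTZsCOKjvuPMt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwNfQ8k0NyXzuuIlcF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""
coded = parse_batch_response(raw_response)
print(coded["ytc_UgxBYclTZsCOKjvuPMt4AaABAg"])
# {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'indifference'}
```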