Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or browse the random samples below.
- "I wonder whether the outcome might have been different if the rider was wearing …" (ytc_UgwjPWSD-…)
- "No 'rich' person adds any value; it is the workers who create the wealth…" (translated from French) (ytr_UgyQ8oJ2u…)
- "@alexxx4434 Exactly. I have seen AI slop and I have also seen people using AI to…" (ytr_Ugyrn8Uim…)
- "AI is a tool that will be fully utilized by those doctors. You still need the…" (ytr_UgyGOZCSU…)
- "A puzzle piece missing: Buyers are necessary to keep businesses running. Witho…" (ytc_UgxZGF0m8…)
- "It's genuinely sad to see how much hatred the common folk has towards AI when ob…" (ytc_UgyOc2t6d…)
- "@klulu-kun Making good AI art IS a process, it takes me 12 hours on average to p…" (ytr_UgxHKHf0U…)
- "From what I understand AI isn't artificial intelligence... That is it's not Inte…" (ytc_UgysENc2e…)
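Looking up the raw model output for a single comment, as the page above offers, amounts to parsing the response JSON and indexing it by ID. A minimal sketch of that lookup; the records and IDs here are hypothetical placeholders, not values from the dataset:

```python
import json

# Hypothetical raw LLM response: a JSON array of coded records,
# one per comment, each keyed by a comment/reply ID.
raw_response = """
[
  {"id": "ytr_abc123", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_def456", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

def index_by_id(response_text):
    """Parse the raw response and build an ID -> record lookup table."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytr_abc123"]["emotion"])  # -> outrage
```

With the table built once, each inspection is a constant-time dictionary lookup rather than a rescan of the response.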
Comment
I get what you're saying, but AI doesn't lie. I've tested various platforms extensively, and they all scored 100% on factual information, even when I tried to trick them by asking them what year the Great Fire of Atlantis happened. If you understood the technology, you'd know that AIs' training data biases them towards certainty rather than saying "I don't know" (they were trained on humans' data, and humans are the same way). You can improve accuracy by asking AIs to insert uncertainty tags, do provenance tagging, or list confidence intervals. The things AIs generally hallucinate about are experiential things, like what they did for fun last week. If you ask them well-known facts, they have an extensive training data set, like sets so big that it takes weeks to train the models and costs millions of dollars, and they're extremely accurate (probably much more than a human teacher, in fact). In one experiment I ran, I collected around 1,500 pages of data, and there were maybe 5 hallucinations, all related to experiential things, not real-world factual knowledge.
Source: youtube | Posted: 2025-11-01T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugxgf0-oseDU1TyR2iJ4AaABAg.AOq0pzkK00mAOq8SEBIFfa","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytr_Ugw7wASvhCg9yEFd9vB4AaABAg.AOq-B6Emv73AOq1a1FC09Q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugw7wASvhCg9yEFd9vB4AaABAg.AOq-B6Emv73AOqlPO2Qdz_","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz6t_pB5UVIWszIXl94AaABAg.AOq-8BGZp0mAOq0ecEWUW-","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwiIrjzZOIlQB-zeYF4AaABAg.AOq-0PyPnVSAOq14AoImCX","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxjcKWGOt0N8_9Zosx4AaABAg.AOpeqsmgtnyAOpjIyiQttg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyoS0yvN2lq2Bwxe314AaABAg.AOtY9E4PlEnAOuNd31OSW0","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgyoS0yvN2lq2Bwxe314AaABAg.AOtY9E4PlEnAOyB44YVh9t","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxPb_YEPvjczbCkmZ94AaABAg.AOtGihUwNU2AOyBWwKfRUW","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgyLLbI3BUkF0kzA8Gl4AaABAg.AOt5y3rWbRMAOyD7xbLZ40","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
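Because the model emits free-form JSON, it is worth checking each record against the codebook before counting anything. A minimal validation sketch; the allowed values below are only those visible on this page and may not be the full codebook, and the two sample records are hypothetical:

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page; the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"approval", "indifference", "outrage", "mixed"},
}

def invalid_fields(record):
    """Return the dimensions whose value falls outside the known codebook."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# Hypothetical records: the second uses a value the schema does not allow.
records = json.loads("""[
  {"id": "ytr_x", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytr_y", "responsibility": "robots", "reasoning": "virtue",
   "policy": "regulate", "emotion": "outrage"}
]""")

bad = {r["id"]: invalid_fields(r) for r in records if invalid_fields(r)}
print(bad)  # -> {'ytr_y': ['responsibility']}
```

Records flagged here can be routed back for re-coding instead of silently skewing the dimension counts.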