Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get what you're saying, but AI doesn't lie. I've tested various platforms extensively, and they all scored 100% on factual information, even when I tried to trick them by asking them what year the Great Fire of Atlantis happened. If you understood the technology, you'd know that AIs' training data biases them towards certainty rather than saying "I don't know" (they were trained on humans' data, and humans are the same way). You can improve accuracy by asking AIs to insert uncertainty tags, do provenance tagging, or list confidence intervals. The things AIs generally hallucinate about are experiential things, like what they did for fun last week. If you ask them well-known facts, they have an extensive training data set, like sets so big that it takes weeks to train the models and costs millions of dollars, and they're extremely accurate (probably much more than a human teacher, in fact). In one experiment I ran, I collected around 1,500 pages of data, and there were maybe 5 hallucinations, all related to experiential things, not real-world factual knowledge.
youtube 2025-11-01T01:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_Ugxgf0-oseDU1TyR2iJ4AaABAg.AOq0pzkK00mAOq8SEBIFfa", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytr_Ugw7wASvhCg9yEFd9vB4AaABAg.AOq-B6Emv73AOq1a1FC09Q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugw7wASvhCg9yEFd9vB4AaABAg.AOq-B6Emv73AOqlPO2Qdz_", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugz6t_pB5UVIWszIXl94AaABAg.AOq-8BGZp0mAOq0ecEWUW-", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwiIrjzZOIlQB-zeYF4AaABAg.AOq-0PyPnVSAOq14AoImCX", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxjcKWGOt0N8_9Zosx4AaABAg.AOpeqsmgtnyAOpjIyiQttg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyoS0yvN2lq2Bwxe314AaABAg.AOtY9E4PlEnAOuNd31OSW0", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytr_UgyoS0yvN2lq2Bwxe314AaABAg.AOtY9E4PlEnAOyB44YVh9t", "responsibility": "government", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxPb_YEPvjczbCkmZ94AaABAg.AOtGihUwNU2AOyBWwKfRUW", "responsibility": "government", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgyLLbI3BUkF0kzA8Gl4AaABAg.AOt5y3rWbRMAOyD7xbLZ40", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
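The raw response is a JSON array of per-comment codes across four dimensions. As a minimal sketch of how such a response could be parsed into a per-comment lookup and validated, assuming the dimension vocabularies are exactly the values observed in this batch (the real codebook may define additional categories):

```python
import json

# Allowed values per dimension, inferred from the codes observed above.
# ASSUMPTION: the actual codebook may contain categories not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "developer", "government"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of code objects) into a
    {comment_id: {dimension: value}} mapping, dropping any entry whose
    values fall outside the assumed codebook."""
    coded = {}
    for entry in json.loads(raw):
        entry_id = entry.get("id")
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if entry_id and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            coded[entry_id] = codes
    return coded

# Usage with a shortened, hypothetical comment id:
raw = ('[{"id":"ytr_abc","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_codes(raw))
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents an off-codebook label, so malformed entries are filtered out rather than silently stored.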