Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@BrianRGioia Automation has already destroyed jobs for low income ppl for hundre…
ytr_Ugz8yA0vJ…
Thousands of new jobs will be created as a result of the shift to AI.…
ytr_UgwA2DB9h…
The Puppet Masters trying to control the sheep 🐑 with AI. I am rooting for AI t…
ytc_UgxPOIVaM…
This is just one more step towards their transition to total digital serfdom, Si…
ytc_Ugx4hJOaR…
The deep fake is more obvious when watching at a lower resolution, funny enough.…
ytc_UgwrZQmDx…
AI is just theft, and I'm glad Sam and other artists (including myself) are prot…
ytc_UgwQA8f35…
@_Quazarz we're the ai's data pool. So it's cannot be be better than the data t…
ytr_UgxYE3kg1…
1. How dumber model(previous) can produce better, smarter models with this enfor…
ytc_Ugx6mKiv-…
Comment
Hmmmmm, as the wife of someone getting his doctorate in data science (statistics +), and having heard the explanations of his doctoral studies, research, projects and dissertation, my understanding is that there is a "maybe" category, making it actually more complicated. That 4% is neither no, nor yes. Saying that the machine only correctly identifies negative results 95% of the time means that the machine must be automatically saying "if not 'no' then yes," which really creates a huge problem and makes that study invalid. Also, the reason for the "maybe" changes the chance of it swinging one way or the other, so this is far more nuanced than this video makes it seem. A test that doesn't account for a third option (i.e. undeterminable) is wildly invalid and over-simplified to the point that there should be a strong push to think about whether it should be considered at all.
But I think the video is right that most people and doctors get it wrong. Because they are doctors, not math geeks.
youtube
2026-04-11T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz1YBFXMDyrmvvejUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwPS1hPvyMyM0HYdBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNgpsXVmL9Vbpk9uV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyP5fFMcQfwd1vCBbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMyvm54nMlCWTg0ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrM96f9GKnUM-S8VZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwE2bYS54-Z-nz4iGF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1X-LcwPD9zeHbNkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwkw1dO0J-tSGZVJ3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNn-7yMfBAW5nulcp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"amusement"}
]
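The raw response above is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing and validating such a response; the field names come from the response above, but the allowed-value sets are assumptions inferred only from the labels visible in these samples, not from an official codebook:

```python
import json

# Label sets observed in the sample responses above. These are assumptions:
# the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "approval", "outrage", "amusement"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record's labels."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugz1YBFXMDyrmvvejUF4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_coding_response(raw)[0]["emotion"])  # indifference
```

Validating at parse time catches the common failure mode of LLM coders: a record whose label drifts outside the codebook, which would otherwise silently skew the tallies.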