Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Looks like we better get on the train or get left at the station .…" (ytc_Ugw3F2ydT…)
- "We struggle to get 80yr olds out of politics, it's paperwork. We can't turn AI o…" (ytc_Ugw6w63LK…)
- "AI technology will wash over the human race like a title wave. When it's done, t…" (ytc_Ugy8C8P37…)
- "Dont worry, we will all be rendered useless, including all the precious c-suite …" (ytc_UgyaUucS5…)
- "Look at this film. One day AI can be prompted and it will be able to make every …" (ytc_Ugx7PvsqW…)
- "@R4gingBull If you can't find something that captures your vision but can create…" (ytr_Ugz-H135S…)
- "😂😂😂Classic UK stance—resisting everything on principle. But let’s be honest: iso…" (ytc_Ugy8OsHYS…)
- "But the are 1000s of videos of black women actually doing this tho. Are u trying…" (ytc_UgxQ8IYVD…)
Comment

> You touched on a valid point. The hallucination problem of LLMs come from being rewarded for answers that are similar to right answers. They might have 90% of the words correct but the answer is 100% wrong. There is some back-end work going in various directions, newer architectures, and better training I don't understand. I'm more familiar with front-end workarounds, increasingly long one-shot prompts, getting it to check answers and show work step by step, search integration, and tool use.

Source: youtube · Video: AI Moral Status · Posted: 2026-01-25T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw-WvKicIaeOqH3NrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwb67oLlWURSZ5mLLZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxN6Y34g4qQUWhrgEZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzWlBmesWeTaDRGa-t4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwTPDRmNdHdb0bwnmB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugye8OqqTc6UlBWPeip4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzfe0GExYy_1wD1D1x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwY-GXPB0CdL5BftsZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_Ugwmr4AgKFk-6KmQdi14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwP95YfqoXwF4qq5Gt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
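The lookup-by-comment-ID workflow this page describes can be sketched in a few lines: parse the raw LLM response (a JSON array of coded records, shaped like the one above) and index it by `id`. The `lookup` helper and the two inlined records are illustrative assumptions, not part of the actual tool; the record fields match the raw response shown above.

```python
import json

# Two records copied from the raw LLM response above, as sample data.
raw = """[
  {"id":"ytc_Ugw-WvKicIaeOqH3NrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTPDRmNdHdb0bwnmB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

def lookup(records, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    by_id = {r["id"]: r for r in records}  # index once, then O(1) lookups
    return by_id.get(comment_id)

records = json.loads(raw)
row = lookup(records, "ytc_UgwTPDRmNdHdb0bwnmB4AaABAg")
print(row["emotion"])  # → outrage
```

Building the `by_id` index once and reusing it is the natural design when many IDs are inspected against the same batch of coded responses.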