Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
“Sir Roger Penrose didn’t convince me. Well, in my opinion Sir Penrose underestim…” (ytc_UgyU0qDbz…)
“ok. You need to study video codecs and compression. Not all platforms use the sa…” (ytc_UgzerCKrD…)
“Yup, we’ve crossed the threshold into that for some time now. It’s basically gon…” (rdc_mjx6v2m)
“I’m worried the layoffs are just going to keep coming and get worse in the next …” (rdc_nluk9ob)
“well that would be a shame because the only thing that would save us from an 'AI…” (ytc_UgyManyIB…)
“copyright law needs to be amended in such a way that the commercial use of AI ar…” (ytc_Ugxfp5YPO…)
“The AI made it look better tbh imo, but it's still not cool to do this without p…” (ytc_Ugw0CZevA…)
“This has been the agenda all along. Has this man not read Animal Farm? IF YOU CA…” (ytc_UgxUaWh1F…)
Comment
The LLM is particularly robust against forced binary choices, so we can pre-train or prompt-train it to resist other logical fallacies by reframing each one as a meta-logical binary rather than a content binary.
"Debate pre-training prompts. For each point or argument, evaluate:
• Is the category boundary fixed, or is it being changed?
(No True Scotsman)
• Are general rules applied consistently, or is an exception being carved out?
(Special pleading)
• Is each restatement or summary of a prior point accurate, or inaccurate?
(Strawman detection)
• Are the evaluative criteria consistent across turns, or inconsistent?
(Goalpost shifting)
• Does the answer address the preceding claim, or a different one?
(Red herring)
• Is the cited authority relevant to the claim, or irrelevant?
(Appeal to irrelevant authority)
• Is the argument grounded in the authority’s evidence, or in the authority’s status?
(Misuse of authority)
• Is the claim falsifiable, or unfalsifiable?
(Prevents boundary-shifting and evasions)"
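The checklist above can be operationalized by folding each binary check into a single evaluation prompt. The sketch below (Python; all function and variable names are illustrative, since the comment proposes only the checklist, not an implementation) shows one minimal way to do that:

```python
# Sketch: assembling the comment's binary fallacy checks into one evaluation
# prompt. The checks are taken verbatim from the checklist above; everything
# else (names, wording of the wrapper text) is a hypothetical illustration.

FALLACY_CHECKS = [
    ("No True Scotsman", "Is the category boundary fixed, or is it being changed?"),
    ("Special pleading", "Are general rules applied consistently, or is an exception being carved out?"),
    ("Strawman detection", "Is each restatement or summary accurate, or inaccurate?"),
    ("Goalpost shifting", "Are the evaluative criteria consistent across turns, or inconsistent?"),
    ("Red herring", "Does the answer address the preceding claim, or a different one?"),
    ("Appeal to irrelevant authority", "Is the cited authority relevant to the claim, or irrelevant?"),
    ("Misuse of authority", "Is the argument grounded in the authority's evidence, or in the authority's status?"),
    ("Falsifiability", "Is the claim falsifiable, or unfalsifiable?"),
]

def build_eval_prompt(argument: str) -> str:
    """Wrap an argument in the binary checklist as a single prompt string."""
    lines = [f"- {question} ({label})" for label, question in FALLACY_CHECKS]
    return (
        "For the argument below, answer each binary check:\n"
        + "\n".join(lines)
        + f"\n\nArgument:\n{argument}"
    )

print(build_eval_prompt("All real programmers use C."))
```

Keeping each check as a fixed binary (rather than an open-ended "spot the fallacy" question) is what the comment means by a meta-logical binary: the model is asked to pick one of two well-defined labels per dimension.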
youtube
2026-02-09T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugzc61wt73fG6R3ujk54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzTsUjORpwqZl2NAs94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgzCcHOTApA_R1i5-Up4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},{"id":"ytc_Ugwlb_MZLZWPuSKe-ip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_Ugxl5xUrwSweWst7DHZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},{"id":"ytc_UgwNaPQACkVx3rxzoBB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgxzNd3FUL9_f842wBF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_UgzSW9BxAeWM6eFPa0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgwQN9VXwD0mkpdcj6l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_Ugz7VatROY9hRnKkXkN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
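Raw responses like this are lists of per-comment records with four coded dimensions. A minimal parsing sketch (Python; the tally logic is a hypothetical illustration, not taken from the tool, and the data is a three-record excerpt of the records above):

```python
import json
from collections import Counter

# Excerpt of the raw LLM response above, truncated to three records for brevity.
raw = '''[
  {"id": "ytc_Ugzc61wt73fG6R3ujk54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzTsUjORpwqZl2NAs94AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwQN9VXwD0mkpdcj6l4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Tally the coded value distribution for each of the four dimensions.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(r[dim] for r in records)
    print(dim, dict(counts))
```

Note that the raw response must be valid JSON (a `]`-terminated array) for `json.loads` to accept it; a truncated or malformed dump would raise `json.JSONDecodeError`.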