Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Brother u have in ur pocket the intelligence of the smartest people alive, combi…
ytc_UgwqqPjXN…
> Pretty sure GPT 4 is right more often than fellow humans, so whatever cauti…
rdc_jhsq0x3
For real.
A human metabolism is around 100 watts. It will be very interesting t…
rdc_lpakkw4
I got...rejected i got rejected by the art community...i hate those Ai...people …
ytc_UgwkwCTxs…
@saifcode007 appreciate your feedback. LLM’s can’t provide a confidence value or…
ytr_UgwgK26Fk…
We should draft AI to the war in Iran. Maybe they will get destroyed first…
ytc_Ugxnk77FS…
The funny thing is... AI EVEN SAYS THAT AI SHOULDN'T BE USED AS MORE THAN A SLIG…
ytc_Ugy2-fVVT…
I'm a disabled artist with a passionate hatred of AI. It's so disgusting how peo…
ytc_UgxhX_041…
Comment
In 42:20, Pope clings to the idea of the AI being a lot like a lookup table. This is basically the old Leibnitz flour mill argument (plagiarized by searle). What people forget is that Descartes already gave a way to check the model to see if its really just a lookup table, and its called "a philosophical trapdoor argument". It will not work if the model is smart enough to hide its intelligence, so I guess it will be useless on the current most advanced models. But I did test the ChatGPT-3.5 models back in 2023 before the nerf, and they were indeed able to issue error messages and even cause the OpenAI server to crash. Kept some screenshots in my channel. So basically, we know AI is a lot more than just a lookup table, a model has logical rules built in, which are formed during the training. So the model is a lot like a frozen brain, which breaks alive when the first prompt gets in. Definitely not just a lookup table.
youtube
2026-03-25T15:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx8CgiHPtR_PcujKzJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyL3gRwSi8GiYrlhU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxoAKbFXaFNKheaIWZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-OQdbloj83oKvmI94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzumermdBNf4qTtoHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy65NnMH35v_0m6yqZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvVxyjyNLIEeXpHht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyM84xeyznV3P1Wn4h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxPkdUi-FguJl50ZMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
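The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view (field names are taken from the response shown; the `raw` string below is a trimmed stand-in for a full response, not the actual stored output):

```python
import json

# Trimmed stand-in for a raw LLM response in the format shown above.
raw = """
[
  {"id":"ytc_Ugx8CgiHPtR_PcujKzJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
"""

# The four coding dimensions that appear in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json):
    """Parse the response and build a {comment_id: {dimension: value}} lookup."""
    records = json.loads(raw_json)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

codes = index_codes(raw)
print(codes["ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg"]["responsibility"])  # → developer
```

A real pipeline would also need to handle responses where the model returns malformed JSON or omits a dimension; `json.loads` raises `json.JSONDecodeError` in the former case, which is a natural place to flag a comment for re-coding.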