Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Elon Musk stated that AI will be a greater threat to humanity than nuclear weapo… (ytc_UgxGVXvox…)
- Is it possible that at the highest levels ai is being used so frequently for fra… (ytc_Ugx_KTiz8…)
- This is beyond disturbing. The powerful few pushing driverless trucks aren’t jus… (ytc_UgxOwUONZ…)
- We appreciate your comment! While the interactions in this video are scripted to… (ytr_UgzCwzScn…)
- Man imagine if Disney from all shitty companies is the one to save us from AI 💀 … (ytc_UgzHJVeTQ…)
- I'm so grateful that God is in charge of all of this and that he loves us 😂. It … (ytc_UgzpI_I5i…)
- Impeccable video Dagogo. Especially that Attention Is All You Need reference. I … (ytc_UgzBnJ2XE…)
- @Sophiakun-e7t We say that a painting or a statue "looks better" in two cases:… (ytr_Ugx5ax9SH…)
Comment
LLMs are indeed limited. They can’t think like humans based on the basic way it works. Everything it does is based on statistical analysis of most likely responses based on training data. That’s not how human brains work at all, so you may have a thing that can regurgitate data way better than humans, but they have no internal judgment or ability to prioritize anything. It has no emotional intelligence or motivation. Even the massive leaps the past couple years haven’t led to any significant difference in output. We do tend to anthropomorphize anything that sort of looks a bit human, though, so it’s no wonder people are attributing human-like thought to it. It’s not, the reason we can’t predict the behaviors is because the systems are dealing with insane amounts of data (much of it wrong or from edgelords) and when you try to crunch that much random data in a statistical model you’re naturally going to get things you didn’t predict.
youtube · AI Governance · 2026-03-19T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw7ARqnzkhlo-y5TuZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJU0OND3ifyhkCraN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyr5L3Y7upMoU8aLq14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxy3K-02v-jm7WWsk54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwSQegabWR1c_jvRX94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwNsyywDUuSwNDTJ8h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJTpeC3FJgyR6443N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwaV7LHWsMfyee_au54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy8d6s-o9R92U6Mq3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyU0Btsah_0sgRGuhp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
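The raw response above is a JSON array of per-comment codes, each object carrying an `id` plus the four coded dimensions. A minimal sketch of parsing such a response and looking up a comment by ID (the two rows are copied from the array above; the `lookup` helper is hypothetical, not part of any tool shown on this page):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgwNsyywDUuSwNDTJ8h4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJTpeC3FJgyR6443N4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the parsed rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment, or raise KeyError."""
    return codes[comment_id]

print(lookup("ytc_UgzJTpeC3FJgyR6443N4AaABAg")["emotion"])  # outrage
```

Indexing by `id` also makes it easy to join these codes back to the original comment metadata, since both share the same `ytc_…`/`ytr_…` identifiers.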