Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
When he tries to explain why AI will be able to become more intelligent than humans, at around the 17 minute mark, the one issue he fails to address is how the AI determines why it should add weight to a particular connection.
When it comes to games such as chess or go, there are very specific goals that can be outlined beforehand, but when it comes to the idea of intelligence and what it constitutes, it is such an amorphous construct that it cannot be pinned down with specific goals and, for me, that is why this prediction does not hold up under scrutiny. Because AI will have no clue as to why a particular weight is an improvement or not.
youtube
Cross-Cultural
2025-10-05T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzsXzCJ-hQevBsogCV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzzTy_7iMzKjVKW2F94AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyISTWcLH1cprOvkTh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxJIOvXpYW7HXQrs0t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugysv0EHiNj7t5pAc1B4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz1KtUjaoFOJbPMX3V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzROdwRhXJeO7lGHrd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzfL3v2p30uelcy2gt4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwmB0K1yUyOjJ6tjN54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxAxMd52utLSx6k-hF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
```
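Looking up a single comment's coding in a raw response like the one above can be sketched in a few lines: parse the JSON array and index it by `id`. This is a minimal sketch, not the tool's actual implementation; the `raw_response` string is a trimmed-down stand-in containing only the second row from the array above.

```python
import json

# Stand-in for a raw LLM response: a JSON array of per-comment codings,
# here reduced to a single row copied from the response above.
raw_response = """
[
  {"id": "ytc_UgzzTy_7iMzKjVKW2F94AaABAg",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "unclear"}
]
"""

# Index the rows by comment ID so any coded comment can be
# retrieved in O(1) by its "ytc_..." identifier.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzzTy_7iMzKjVKW2F94AaABAg"]
print(coding["reasoning"])  # -> consequentialist
```

The same dictionary can then feed the per-comment "Coding Result" table: each key of `coding` other than `id` is one dimension row.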