Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Was the policy of Peter in this “debate” to ignore the questions of the AI?…" (ytc_UgyueRI4a…)
- "This is honestly terrifying… If even the “Godfather of AI” is warning us, we’re …" (ytc_UgwKGs8E1…)
- "trust me we already done a lot of research on this AI thing, it never made somet…" (ytr_UgwGR51hS…)
- "what many are not getting is that Anthropic is an AI research company, not an AI…" (ytc_UgxKuyKb2…)
- "The funniest thing when AI will become sentient being and take over all the rich…" (ytc_Ugw4wsJEc…)
- "This is the furthest America has gotten since the early 90s, and we yet to have …" (ytc_Ugxj4NLJJ…)
- "Its an old scifi short my dude to bring focus on robotic/ai dangers for humanity…" (ytr_UgxZ243Cy…)
- "This is the USA wants all the fossil fuel. They want it for AI data centres. We …" (ytc_UgxCHl7tS…)
Comment
I find the line between "probabilistic agents running in loops" and "agi" weaker than it appears.
I consider myself a pretty strong software engineer, and I think my brain thinks in bursts of insight, connected by a constant mental loop of reassessment and reevaluation.
Every new invention in the world of LLMs (basically every point of your video) seems to me eerily parallel to the mechanisms in which we humans think or collaborate.
Not a hack, the very thing.
youtube
AI Jobs
2026-02-27T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzJ9LbUT0J5cJ_NYHl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVuUJSdB60xROxEqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_HouQb_ewZafyQN94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyoB-2vKNJm7wKO3DJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx5rRYek5UwADwBQu94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwVNddkzr72ailET3p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyciqq3eEQrSu8WO4x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlShgxnqcoyYaDusR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgydwHIEnCGKtystkyp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzLAWJBIEltCqWTBX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
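The "look up by comment ID" workflow above can be sketched as a small parsing step: the raw LLM response is a JSON array of coded comments, each carrying an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch, assuming only that format — the function and variable names here are illustrative, not part of the tool:

```python
import json

# Hypothetical example response in the same shape as the raw LLM output
# shown above (a JSON array of objects with an "id" and coding dimensions).
raw_response = """[
  {"id": "ytc_UgzJ9LbUT0J5cJ_NYHl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwVuUJSdB60xROxEqF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and map each comment ID to its coded dimensions."""
    records = json.loads(response_text)
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in records
    }

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzJ9LbUT0J5cJ_NYHl4AaABAg"]["policy"])  # regulate
```

With the full response indexed this way, inspecting the exact model output for any coded comment is a single dictionary lookup by its ID.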