Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Dam didn’t know ai art was obliterating consent too jesus, have we evolved and p… (ytr_UgxLcAc-A…)
- "An autonomous drone needs to decide whether to risk the lives of busload of civ… (ytc_UggxCaNMy…)
- AI is coming for the computer jobs first, physical work and skilled work will be… (ytc_Ugx4855LD…)
- One thing artist can do which Ai users can’t is claim their art as their own and… (ytc_Ugy7-EQGg…)
- I made an AI racist b4 cuz i was talkin abt my black friend to ChatGPT and i tho… (ytc_UgwYmNCvv…)
- Based on what you said AI won't let junior developers in. Because this was what … (ytc_Ugz7Kt2T_…)
- V2V (vehicle to vehicle) transponders. Basically, every vehicle on the road e… (ytr_UgzhuiJI9…)
- My KI needs 5 trys to sort 30 Numbers, i dont know where the intelligence is her… (ytc_Ugw5gq0TL…)
Comment
Thank you for the interesting episode and the discussion on the personhood of "sufficiently strong" AI agents. I would like to share my thoughts on this topic, even though I realize it may be an oversimplification of the matter.
If we represent a "sufficiently strong" AI agent using a minimalistic Venn diagram, it lies at the intersection of two circles: agency and intelligence. Currently, intelligence is provided by a "sufficiently strong" LLM, while agency is provided by a "sufficiently strong" architecture. Projecting into the future, LLMs will become "stronger"—perhaps real-time training will be solved—while agency architectures will undergo standardization and categorization.
Our perception of personhood is heavily biased by anthropomorphism, so other perspectives must be entertained. For instance, Alex mentioned that "strong" LLMs are "societies" in the sense that they are trained on the bulk of human knowledge, yet this society is condensed into a single entity (one per giga-data-center).
Combining these ideas, we see a future of centralized intelligence and decentralized agency. In this model, agents are spawned to address a specific problem or task, likely by a long-existing "strong" agent. These task-driven agents are reminiscent of specialists or specialized tools; within an industrial worldview, even a human specialist is a "tool" of society. Therefore, we may see a limited number of "strong" AI agents and a myriad of sub-agents running on borrowed intelligence. Person-to-person, person-to-society, society-to-person, society-to-society relationships are common, and this would be no different.
Admittedly, this perspective does not yet address robots with enough edge compute to run a "sufficiently strong" LLM locally.
Source: youtube · 2026-02-06T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwWshhJS3yXhEjFiod4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3lcx_9js9SYJwN6V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugygq2LHYgKuoobPVz14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUxGKAMb6W7Xg_Y5R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzkwJES4phA-l492Nh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx59c796HUTJdn0dQR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxEB8TSTUUOx-JzIyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8hazA2CrU-G9bwB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzgaq3AGa_O6hxHlCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxP8WNjhWLwWYlJott4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
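A raw response like the one above can be turned into the per-comment lookup the tool offers ("Look up by comment ID") with a small amount of parsing and validation. The sketch below is illustrative only: the allowed values per dimension are inferred from the sample output shown here, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The actual codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "industry_self", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        # Reject values outside the expected codebook so malformed
        # model output is caught instead of silently stored.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected value for {dim!r}: {row.get(dim)!r}"
                )
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example with one row from the sample response:
raw = """[
  {"id": "ytc_UgwWshhJS3yXhEjFiod4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""
coded = parse_raw_response(raw)
print(coded["ytc_UgwWshhJS3yXhEjFiod4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed value set at ingestion time is a deliberate choice: LLM coders occasionally emit out-of-vocabulary labels, and failing loudly keeps the coded dataset clean.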