Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you for the interesting episode and the discussion on the personhood of "sufficiently strong" AI agents. I would like to share my thoughts on this topic, even though I realize it may be an oversimplification of the matter. If we represent a "sufficiently strong" AI agent using a minimalistic Venn diagram, it lies at the intersection of two circles: agency and intelligence. Currently, intelligence is provided by a "sufficiently strong" LLM, while agency is provided by a "sufficiently strong" architecture.

Projecting into the future, LLMs will become "stronger" (perhaps real-time training will be solved), while agency architectures will undergo standardization and categorization. Our perception of personhood is heavily biased by anthropomorphism, so other perspectives must be entertained. For instance, Alex mentioned that "strong" LLMs are "societies" in the sense that they are trained on the bulk of human knowledge, yet this society is condensed into a single entity (one per giga-data-center).

Combining these ideas, we see a future of centralized intelligence and decentralized agency. In this model, agents are spawned to address a specific problem or task, likely by a long-existing "strong" agent. These task-driven agents are reminiscent of specialists or specialized tools; within an industrial worldview, even a human specialist is a "tool" of society. Therefore, we may see a limited number of "strong" AI agents and a myriad of sub-agents running on borrowed intelligence. Person-to-person, person-to-society, society-to-person, and society-to-society relationships are common, and this would be no different.

Admittedly, this perspective does not yet address robots with enough edge compute to run a "sufficiently strong" LLM locally.
youtube 2026-02-06T15:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
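
For readers scripting against these results, here is a minimal sketch of one coded record as a Python type. The field names mirror the raw response below; the class name CodedComment is a hypothetical label for illustration, not part of any published pipeline, and the example values listed are only those visible on this page:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    """One coded comment, mirroring the fields in the raw LLM response."""
    id: str              # YouTube comment id, e.g. "ytc_UgwWshhJS3yXhEjFiod4AaABAg"
    responsibility: str  # e.g. "none", "ai_itself", "company"
    reasoning: str       # e.g. "unclear", "deontological", "virtue", "consequentialist"
    policy: str          # e.g. "unclear", "industry_self", "regulate"
    emotion: str         # e.g. "indifference", "outrage", "fear", "approval"
```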
Raw LLM Response
[ {"id":"ytc_UgwWshhJS3yXhEjFiod4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx3lcx_9js9SYJwN6V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugygq2LHYgKuoobPVz14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyUxGKAMb6W7Xg_Y5R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzkwJES4phA-l492Nh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx59c796HUTJdn0dQR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxEB8TSTUUOx-JzIyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz8hazA2CrU-G9bwB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzgaq3AGa_O6hxHlCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxP8WNjhWLwWYlJott4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]