Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The popular idea of an imminent AI overlord collapses as soon as you examine how these systems actually function. Modern AI has no motive of its own. It does not form desires, goals, or intentions. It cannot initiate action; it can only generate responses when prompted. A system without motive has no pathway to agency, and without agency there is no threat.

This becomes even clearer when consciousness is considered. If consciousness is fundamental rather than emergent, then no amount of computation, scaling, or training can produce a conscious agent from statistical pattern recognition. A language model can simulate reasoning, but simulation is not experience. It has no inner life, no subjective perspective, and no self to protect. Without consciousness, there is no basis for fear, ambition, or self‑directed behaviour.

Human motivations arise from physiology: hormones, metabolism, evolutionary pressures, and social instincts. These biological forces shape our drives and conflicts. An AI system has none of this machinery. Treating a statistical engine as if it shared human motives is a category error. It cannot want anything because it has no mechanism for wanting.

Even if one imagines a hypothetical scenario where such systems could develop agency, the physical constraints alone make the idea implausible. Current models require enormous power, industrial cooling, and constant human maintenance. Some data centres rely on entire arrays of jet‑engine‑class generators just to keep the servers running. A system that depends on a power plant and a support crew is not poised to seize control of anything.

On top of this, these systems lack any internal relevance filter. Humans learn selectively because they have motives and a sense of what matters. AI systems absorb patterns indiscriminately, with no ability to distinguish useful information from noise unless a human explicitly curates it. Without relevance, there can be no strategy, no planning, and no coherent long‑term goals.

Finally, conflict between agents arises from competition for resources. Humans and AI systems do not compete for food, territory, shelter, or any other evolutionary necessity. There is no overlap in survival needs and therefore no natural basis for conflict. Even a hypothetical agentic AI would have no reason to oppose humans, because it would not inhabit the same domain of requirements.

Taken together, these six considerations form a consistent picture. The doomer narrative relies on assumptions that do not match the architecture, behaviour, or physical reality of current systems. Without motive, without consciousness, without biological drives, without autonomy, without relevance filtering, and without resource competition, the foundation for an AI takeover simply is not there.
youtube AI Responsibility 2026-04-05T14:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyBZlXdh1ykv2eZX1F4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwhzYGk6zMZXklnI_F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",      "emotion": "sadness"},
  {"id": "ytc_UgwDj1YmnEw8_DyATC14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxEc41ggeCi49hFoBN4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugzl0saLqmiH739AU8x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgwTHjomGIQP1S6UHFl4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgxrHYxdjKiTA0OTxYx4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgwJ9Var2VZvYFXvUcN4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgxbWzNwph_ZhE-QEkV4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxtUqlIGB7oP8BggOF4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
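A raw response like the one above has to be parsed and validated before the codes can be trusted, since the model may emit malformed records or out-of-vocabulary labels. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this dump (the actual codebook may define more categories), and the `validate_codes` helper name is an assumption, not part of the pipeline shown here.

```python
import json
from collections import Counter

# Label sets observed in the raw response above -- an assumption standing
# in for the real codebook, which may contain additional categories.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "sadness", "indifference", "mixed",
                "resignation", "approval", "fear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in the dump all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and drawn from its allowed set.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage with a single (hypothetical) record:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
codes = validate_codes(raw)
print(Counter(r["emotion"] for r in codes))
```

Filtering rather than raising keeps one bad record from discarding a whole batch; records that fail validation can be logged and re-queued for re-coding instead.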