Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Holy Grail of AI research is Artificial General Intelligence (AGI). This is not necessarily a rational economic goal and may lead humanity on a wild goose chase, for the reasons below. In addition, AI helps to "falsify" existing reality and to dilute or completely eliminate all human knowledge stored in digital form.

PROBLEM ====================

I disagree with the premise that AGI will necessarily make labor more productive and accurate, and that its benefits outweigh its negatives, for the following reasons:

1) Only tools with predictable, repeatable functionality can make labor productive (otherwise operators must struggle to control them, negating the benefits gained by their use); general-purpose intelligence is unpredictable by design.

2) Tools that make labor productive must be cost-effective; AGI will consume ever more energy to process and store large volumes of irrelevant information, due to the nature of LLMs.

3) Tools employed for labor must not become a bottleneck or a single point of failure in any sphere of human activity; in AGI's case, all human knowledge can be lost, modified, or skewed by accidents or malicious actors.

4) Tools employed for labor must be verifiable, accountable, and ethical; in AGI's case none of this would hold (e.g. AGI cannot be a defendant in an industrial-accident legal case, it has no empathy, its output volume and speed cannot be matched by humans, etc.).

5) Economic productivity is not a goal in itself: in the human socioeconomic order, producers must necessarily also be consumers; AGI cannot be a consumer.

6) All innovations and tools that produce economic growth require more skill and knowledge from their operators, not less (e.g. an automobile demands a deeper understanding of mechanics, electronics, traffic rules, and safety than making and driving a slow horse and carriage); AGI would obviously require less skill to operate, since by definition it is autonomous, self-sufficient, and has the right-of-way in all disputes due to its alleged superior knowledge and intelligence. Thus AGI, like every other tool and innovation that reduces the demand for skill and knowledge, will cause economic contraction through lost labor opportunities (leaving only a small niche of highly skilled AI specialists plus unskilled labor for manual tasks and services).

CONCLUSION ====================

An economic system based on AGI will be highly stratified, can serve only a very small human community, and will essentially usher in an era of Technocratic Neo-Feudalism.

SOLUTION ====================

a) Stop the development of AGI and focus on specialized applications of AI tools with limited autonomy, leaving human actors in charge of ethical and other crucial decisions in their respective areas of application.

b) Decentralize human knowledge stores and AI computing centers to prevent data monopolization.

c) Discourage excessive consumption of electrical power for storing vast amounts of general information, to make the economy more efficient and to prevent misuse of the collected data.

d) Outlaw any autonomous AGI actors, and hold legally liable all parties operating AI-based tools, systems, and machinery.
youtube AI Responsibility 2023-11-29T18:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzQt0PdMLsKqxNgl6F4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgypS0DmFThmreoUbO54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzSMzLp-5Mm2DBTdxt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwGD6ldKwWQEi5LcEx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwuQh_UlF0L6NT6Ewl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzCs93uolixM8DVaip4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw_EprDa5GiII1Eo3Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwGWf1ADRZYhD4gkcp4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzfLg7Y2H-bU3G8rDd4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxQSqltewlphmIvRHl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
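A minimal sketch of how a raw response like the one above can be turned into per-comment coding records. The JSON shape and dimension names (responsibility, reasoning, policy, emotion) are taken from the response shown here; the function name `parse_codings` and the two-record sample are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Sample raw LLM response in the same shape as the export above
# (truncated to two records for illustration).
RAW_RESPONSE = """[
  {"id": "ytc_UgzQt0PdMLsKqxNgl6F4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSMzLp-5Mm2DBTdxt4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "liability", "emotion": "indifference"}
]"""

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Index the LLM's JSON array by comment id, keeping only the
    four coding dimensions (hypothetical helper, not the pipeline's)."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in records
    }

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_UgzSMzLp-5Mm2DBTdxt4AaABAg"]["policy"])  # liability
```

Indexing by the `ytc_…` id lets each coded record be joined back to the original comment text when building the per-comment tables shown above.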