Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just about any technology is "dual-use," i.e., it can be used to benefit humanity and/or the biosphere, or misused in ways that are harmful. The more powerful a technology is, the greater its promise _and_ peril. E.g., nuclear energy can be used to provide electricity (among other beneficial uses), or end human civilization. AI, especially AGI or ASI (artificial superintelligence), will be the most powerful technology humans have ever created, assuming it works as promised. Thus, it makes sense for anybody (Leftist or otherwise) to embrace its potential while being concerned about its dangers.

Furthermore, the Left-Right political axis does not address views on technology. That would be better addressed with a second axis (call it Forward and Back) that defines whether someone is in favor of continuing technological advancement, or prefers a more "low-tech" suite of technologies. Alternatively, one can believe that Progress (or Regress) is inevitable.

WRT FALC, it would be interesting to see what Objectivists think of it, assuming that it's possible (i.e. no resource limits or other barriers prevent it from happening). Since Objectivism holds that productivity (work) is central to a moral person's purpose in life, the idea that robots/AI could do all the work and leave humans able to live as a leisure class in a crystal-spires-and-togas utopia would seem to be a threat to the Objectivist world-view.
youtube AI Governance 2023-11-03T13:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzVkG5uJpGAo7k55cR4AaABAg.AO-IIiAc7ijAOBSPt63BUd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzfhQLxWTd76bF7DM94AaABAg.AO-E9Mc_OufAOJAVQ8th1R", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgwyU7UUDHw-evBqLdF4AaABAg.9pkHyhvViUt9weNpnl8GjI", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgwLjQMToeAQJp1chpd4AaABAg.9pcfjlIF7i-9qBu9gEVFID", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwwXxqANGCiGxn50_J4AaABAg.9q9U9P6dArq9qBP4vmXw0l", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwwXxqANGCiGxn50_J4AaABAg.9q9U9P6dArq9qBxP-BzWo-", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugys0ie_-2JuC7fz7Yp4AaABAg.9q92fuwfyZz9q9PBiH6UNn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugys0ie_-2JuC7fz7Yp4AaABAg.9q92fuwfyZz9q9vdxahAXi", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzYd7VS5CmK3W2puu14AaABAg.9q91x35nhTA9q9PVqYiU7g", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugys9OPYOr4et21BvtF4AaABAg.9q91l2f52wt9qB3bAffg2w", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
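A response like the one above can be checked and indexed with a few lines of Python. This is only a sketch under the assumption that the batch response is a JSON array of objects with the fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the example `raw` payload and the allowed-value sets below are illustrative, not the tool's actual schema.

```python
import json

# Illustrative payload mirroring the raw LLM response format above.
raw = """[
  {"id": "ytr_example1", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Assumed value sets, inferred from the labels visible in this export.
ALLOWED = {
    "responsibility": {"none", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

codings = json.loads(raw)

# Validate each row, then index the codings by comment id for lookup.
for row in codings:
    for dim, allowed in ALLOWED.items():
        if row[dim] not in allowed:
            raise ValueError(f"{row['id']}: bad {dim} value {row[dim]!r}")

by_id = {row["id"]: row for row in codings}
print(by_id["ytr_example1"]["emotion"])  # -> fear
```

Indexing by `id` makes it cheap to join a coded row back to the original comment record when inspecting individual outputs, as this page does.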