Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You're right, finding a number to represent the probability of either of these outcomes would be complete speculation. Comparing them to each other, however, is possible. If we approach it as a limit problem there is a clear winner. Given infinite time and infinite compute, and given aging as a problem with technical solutions, the problem of aging will eventually be solved. On the other hand, AI destroying humanity is a scenario and not a problem with potential technical solutions. It is one of many potential scenarios, none of which are guaranteed to occur, even with infinite time and compute. It's not clear that slowing down development in AI, even if it were possible, would decrease the probability of the doomsday AI scenario. It would, however, decrease the likelihood that advances in AI help to solve the problem (and the existential threat) of aging within our lifetimes.
Source: YouTube, AI Governance, 2023-07-17T06:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgxGaW9p18AEp5IotE94AaABAg.9rcov6TyeMk9sG0KIycBlk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugz-I_5z2MH1F-xN_bt4AaABAg.9ra7EGbrpwC9s-nH-OgPNs","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_Ugz-I_5z2MH1F-xN_bt4AaABAg.9ra7EGbrpwC9tOj5W4zn70","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgzUXE2d9iCAiRPKfyN4AaABAg.9rYaYPPO-7MA72SGK_7Aqz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytr_Ugy8fQDWMBP-0LRsOAB4AaABAg.9rTdNN4aeVh9rY9ZyGIH8X","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytr_Ugy8fQDWMBP-0LRsOAB4AaABAg.9rTdNN4aeVh9ra3e-kPDb_","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"fear"}, {"id":"ytr_Ugz8-e_-RYQnkl5h3MN4AaABAg.9rT0qF094RK9rTtn13CD3K","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxcuDNaybYEsp5vnLZ4AaABAg.9rT-uSfjN9d9rTLwqFhvnA","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwH-6hm87UtoueFPWt4AaABAg.9rSv5z5xe2O9rWNAl3PZiG","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwH-6hm87UtoueFPWt4AaABAg.9rSv5z5xe2O9rsZ4ItlwZY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]