Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Even with self-driving trucks, they would still need a driver to do a pretrip se…" (ytc_UgzD3K2nW…)
- "Can AI save the planet? Can AI reduce the threats to planetary health? Probably …" (ytc_Ugzt1Y3ZA…)
- "I feel like trying to goon with Ai is more trouble than it’s worth. Sure, you ca…" (rdc_mhwxa9t)
- "In the near future we'll have products that'll will say "Ai Free" and itll be a …" (ytc_UgySyVr0J…)
- "The discussion on AI vulnerabilities hits home! We rely on Pneumatic Workflow to…" (ytc_UgyVBQyAj…)
- "generative ai's only deeper meaning as "art" is that of the dystopian future we …" (ytc_Ugz1nzQaT…)
- "@smokymcbongwater1088 my head isnt in Hollywood it's in understanding that AI do…" (ytr_UgxeEqy56…)
- "Our Best chance of making AI safe, is making AI have Zero ability to get to data…" (ytc_Ugz_uPPQG…)
Comment

> You're right, finding a number to represent the probability of either of these outcomes would be complete speculation. Comparing them to each other, however, is possible. If we approach it as a limit problem there is a clear winner. Given infinite time and infinite compute, and given aging as a problem with technical solutions, the problem of aging will eventually be solved. On the other hand, AI destroying humanity is a scenario and not a problem with potential technical solutions. It is one of many potential scenarios, none of which are guaranteed to occur, even with infinite time and compute. It's not clear that slowing down development in AI, even if it were possible, would decrease the probability of the doomsday AI scenario. It would, however, decrease the likelihood that advances in AI help to solve the problem (and the existential threat) of aging within our lifetimes.

youtube · AI Governance · 2023-07-17T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgxGaW9p18AEp5IotE94AaABAg.9rcov6TyeMk9sG0KIycBlk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz-I_5z2MH1F-xN_bt4AaABAg.9ra7EGbrpwC9s-nH-OgPNs","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugz-I_5z2MH1F-xN_bt4AaABAg.9ra7EGbrpwC9tOj5W4zn70","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzUXE2d9iCAiRPKfyN4AaABAg.9rYaYPPO-7MA72SGK_7Aqz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugy8fQDWMBP-0LRsOAB4AaABAg.9rTdNN4aeVh9rY9ZyGIH8X","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugy8fQDWMBP-0LRsOAB4AaABAg.9rTdNN4aeVh9ra3e-kPDb_","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
{"id":"ytr_Ugz8-e_-RYQnkl5h3MN4AaABAg.9rT0qF094RK9rTtn13CD3K","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxcuDNaybYEsp5vnLZ4AaABAg.9rT-uSfjN9d9rTLwqFhvnA","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwH-6hm87UtoueFPWt4AaABAg.9rSv5z5xe2O9rWNAl3PZiG","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwH-6hm87UtoueFPWt4AaABAg.9rSv5z5xe2O9rsZ4ItlwZY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
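A response like the one above can be turned into the per-comment coding result shown in the table by parsing the JSON array and validating each row. The sketch below is a minimal, hypothetical example, not the tool's actual pipeline; the allowed value sets are only those observed in this sample, and the real codebook may contain more.

```python
import json

# Allowed values per dimension, as observed in the sample response above.
# The full codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response into a lookup keyed by comment ID,
    rejecting rows whose dimension values fall outside the codebook."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a single-row response (hypothetical ID).
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytr_example"]["emotion"])  # indifference
```

Keying the result by comment ID is what makes the "look up by comment ID" view above cheap: each coded comment resolves in a single dictionary access.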