Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it quite frightening that people defer to computer programmers on the question of whether AGI is potentially dangerous. "On 29 December 1934, Albert Einstein was quoted in the Pittsburgh Post-Gazette as saying, 'There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean that the atom would have to be shattered at will.'" Read more: https://www.newscientist.com/article/dn13556-10-impossibilities-conquered-by-science/#ixzz6UgkFDUHG Einstein was completely wrong, and he did not even have the strong economic incentives to be wrong that AI researchers do.

If you asked Henry Ford whether all of these cars might cause climate problems one day, would he even be motivated to listen carefully to your argument about the risks? Philosophers should be the last people to just defer to engineers on a question like this, where the survival of humanity is arguably at stake.

Engineers do NOT have a good track record of predicting the risks of the technologies they work on, and AI researchers in particular have a very poor track record of predicting the rate of improvement of their own field. They were blindsided by the efficacy of neural networks for vision tasks and then blindsided again by AlphaGo. On the other hand, they have made grandiose promises about self-driving cars that have not come to fruition. Nobody knows how far we are from the key breakthrough. It could be a year, it could be a century. To demonstrate that machines are not "really" on a path to intelligence, you will need to define intelligence.
Source: reddit · AI Moral Status · 2020-08-10 (1597035982.0) · ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
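
For downstream analysis it can help to treat each coding result as a typed record. The following Python sketch mirrors the four dimensions above; the label sets are inferred only from the responses visible on this page, so the actual codebook may define additional values (an assumption, not the full scheme).

from dataclasses import dataclass

# Label sets inferred from the raw response shown below; the real
# codebook may allow more values (assumption).
RESPONSIBILITY = {"company", "developer", "none", "ai_itself"}
REASONING = {"consequentialist", "deontological"}
POLICY = {"regulate", "liability", "none"}
EMOTION = {"approval", "outrage", "indifference", "fear"}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, as displayed above

    def validate(self) -> None:
        # Raise if any dimension holds a label outside the known sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")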
Raw LLM Response
[ {"id":"rdc_kykw5yc","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"rdc_kyltinv","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"rdc_g0y7v05","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_g10p5cs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_g0ys5vt","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]