Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wolfram sums up by saying he is an optimist. That is as vague and dodgy as saying "I believe in the rapture". In both cases, ridiculously unrealistic. Computers have been dictating to us since the 60s when they were totally dumb. We have organised our societies around dependence on them and the associated needs of powering and repairing and building more of them and continually improving their designs. We may have backed ourselves into a corner by doing so. Because they lead to human greed ruthlessly exploiting them for gain at the expense of other humans. Can't blame the machines. It's always humans to blame, so far. But we aren't our own best advocates, we continually sell ourselves short. We discount the future because we won't be in it. We neglect the interests of our descendants. Look at behaviours of psychopaths who have hijacked the resources of entire continents for their own individual aggrandisement and ego, such as Putin and Trump, or on a lesser scale Kim and Khamenei and Gaddafi and Assad and many others. That should give us some insight into what happens when micro-optimisation (pursuing the goal of one individual or faction) trumps macro-optimisation (the goal of supporting the needs of the proletarian majority who have very little power and can therefore be ignored unless they start to rebel out of desperation). Every goal we give to an AI will be a micro-optimisation. (The paper-clip optimiser idea.) Look at psychopaths and big corporations for examples of what goes wrong when the feedback loops are connected such as to ensure a specific micro-optimisation rather than a global macro-optimisation. And a small number of individuals get to choose those optimisations, discounting the will of others.
Source: youtube · AI Governance · 2025-06-18T20:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
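Each dimension in the coding result takes one categorical value. Below is a minimal validation sketch in Python, assuming the category sets are exactly those observed in the raw response further down; the real codebook may define more values, and ALLOWED and validate_record are illustrative names, not part of the pipeline.

# Hypothetical sketch: check one coded record against the category sets
# observed in the raw LLM response below.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

print(validate_record({"responsibility": "distributed", "reasoning": "consequentialist",
                       "policy": "unclear", "emotion": "resignation"}))  # -> []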
Raw LLM Response
[ {"id":"ytc_UgwOADiuaXBnCzNn12t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzyqx28DsxiPaLFTyh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwDXqplPpxNozU2sF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzvDuGZnPv_v4DYeK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwpv41S56DBe6sSL3R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgznR5t1fDRorLMcrZF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxunEQ6aq6xLWUDo3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxx6qeyYN7ufVjcLJd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxp2OlZXn271yQiZv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz8BL9ElYhezuf-c4l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]