Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Because we don't have a definition of intelligence to start with, any prediction about what will happen to the human race once "super intelligence" emerges lacks intellectual rigor and should be left to sci-fi novelists. Hinton fails to provide proper evidence supporting his dark predictions, or to explain how and why exactly a super intelligence would necessarily develop malevolent intentions. By simply projecting very human traits onto an undefined super intelligence, Hinton forgets that these traits are the result of the millennia-long, brutal selection process our species emerged from. What exactly would lead an AI to develop those traits without itself having been produced by a selective process? You could just as well say that a super intelligence, having been developed by humans, will be smarter than humans but still keep the traits we like in dogs, i.e., very attentive to our needs and quite incapable of purposefully planning our demise. Sounds silly? Maybe, but no sillier than predicting that a super intelligence could become a predator, i.e., a living organism that natural selection made effective at hunting, killing, and eating its prey.
youtube AI Governance 2025-06-19T00:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzot--byus4g_FlGPl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz369caXih0GkNaUCV4AaABAg", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxk5Fe-cJzjEvW4hXB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyARcdKGYXuy9q_MDN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzmLOcvgOwJ9Aff9L94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy0xtJzPRdz-vFTFZJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxGLLQyUr0n-ekRy9t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxieZZc2JhF7b40lpB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzHN8XbVJfHClHEMK14AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxogfysGBErTSKCDaV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
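The coding result shown above is one record pulled out of this batch response by its comment id. A minimal sketch of that lookup, assuming the raw response is the JSON array returned by the model (abridged here to two records; ids and field names are taken from the response above, while the variable names are illustrative):

```python
import json

# Abridged copy of the raw LLM batch response: a JSON array of
# per-comment coding records, each keyed by the comment's id.
raw_response = """
[
  {"id": "ytc_Ugxk5Fe-cJzjEvW4hXB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0xtJzPRdz-vFTFZJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Parse the batch and index the records by comment id for O(1) lookup.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Fetch the coding for the comment displayed on this page.
coding = by_id["ytc_Ugxk5Fe-cJzjEvW4hXB4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
```

Run as-is, this prints the four dimension values shown in the Coding Result table (developer, consequentialist, none, indifference).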