Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting, but I believe Prof. Hinton's scenario will never become true, because if any power —USA, China, or Russia or xyz— develops super intelligent AI first, their initial priority will be to devise a plan to eliminate the other two rivals and secure global dominance. This will fire a devastating conflict between humans: World War III
youtube AI Governance 2025-06-23T07:3…
Coding Result
Dimension      | Value
Responsibility | government
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugz2pt1-_dOmLLzHIm14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFXQBQaT4M540fIXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz2U48IuUOH7TgWlc94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyeXBMWnysQNyI5JHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyfeIq2QarftsZUsdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgygbhS_55pMPeSlSTF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7rVrG35GReVTNB1V4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwsUyhSKjXoI0oN37R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwbiR026KIh9Syz9yl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxw2_akOv8rCqDeRCd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
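A batch response like the one above can be parsed into per-comment codings keyed by comment id. The sketch below is a minimal, hedged illustration: the four dimension names are taken from the example output, but the validation logic and function names are assumptions, not the pipeline's actual code.

```python
import json

# Two records excerpted from the raw response above, for illustration.
raw = '''[
  {"id": "ytc_Ugz2pt1-_dOmLLzHIm14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgygbhS_55pMPeSlSTF4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# The four coding dimensions seen in the example output.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a raw LLM response and index codings by comment id,
    rejecting records that are missing any dimension."""
    out = {}
    for rec in json.loads(text):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = parse_codings(raw)
print(codings["ytc_UgygbhS_55pMPeSlSTF4AaABAg"]["emotion"])  # fear
```

Strict validation matters here because LLM output is not guaranteed to follow the schema; failing loudly on a malformed record is safer than silently coding it as missing.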