Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i dont see an intelligence that is predicted to harm or dispose of humans as More Intelligent. The AIs so far are trained in documents written by mostly Humanists, the anti-human authors didnt make it to the press often. Even if some like Mein Kampf are in the training data there's 1000 authors denouncing that book. So an Ai left to its own devices based on current sets of training documents wont go evil. But the unforeseen is unforeseen. Now the same DNN trained by evil writings will be evil, but that's the human choices, and the "good AIs" may be the only ones capable to defend us against the bad ones, so dumbing down is possibly not a good idea. I noticed that AI "experts" are regularly un-promising that the super AGI is near and will be able to think, and invent. This makes me think there are forces like the Pentagon who want to freeze AI developments in case China for example could use them to use them to gain supremacy in cyberwars or to strategize old school wars or become the top industrial force, invent better/cheaper Domestic Robots for ex.
youtube AI Governance 2025-06-23T00:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzKoLv-PzAm-LhV8ap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxp_U1q07iztPHcr6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwWH4ietbUL3-tPdr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx_HSTyv6MB8755cot4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw-NJ61zfFBcEpRWhV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwCDEWaCDp0nwGnJHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxzl-hiOiJlUG7zbk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyzkNhlU9uhlBJ95xd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyngWy6jd1UnwCXstx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzuPrOorFSI5DwYgRZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"} ]