Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Existential risk posed by AI comes not from AI as the dominator of humanity, but from AI becoming the tool by which a few humans, or one human, gains predominant control over the security and decision-making of all others. A real existential threat arises now as we cede control to a few corporations or a single one. This is not yet fully realized, but it is starkly illustrated by recent events in which a few people are attempting to seize for themselves control over the security of every human life on earth, by seeking the right to use any force necessary to inject any or all humans, at their discretion, with chemical or biological materials of their choice. Such injected substances can have undetermined or delayed effects, including the death of any or all humans who receive them, voluntarily or by other means. The people seeking the power to forcefully inject others have preemptively sought and been granted immunity from responsibility for any negative consequences or diseases, including the deaths of any number of other people. Although this danger signal can be seen by everyone, very few have stopped to recognize its importance.
youtube AI Governance 2023-07-05T10:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxs32GfFAuVJqXtsER4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyP32EFA3Y5ktq3NCR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxpm-nkEA4Jlj1DWUZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx3O-lecstqLqiaL5N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx5LT0M-B6vvyirP9Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgywSt7QVnzDLLJwsnZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzR72iHwgV5RJqN_6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx1ltpClDN2cUZQHmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwB-pGM8x1G4L7K-sB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugymf1lykKqLfaW0dVN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
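When inspecting raw LLM output like the array above, it is useful to parse it and check that every record uses only the expected dimension values. The following is a minimal sketch; the allowed-value sets are assumptions inferred from the responses shown here (plus an `unclear` fallback for each dimension), not a confirmed coding schema.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from
# the raw responses in this dump; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown dimension values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings[0]["policy"])  # regulate
```

A record whose `emotion` (say) falls outside the allowed set raises `ValueError`, which makes schema drift in the model's output visible immediately rather than silently polluting the coded dataset.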