Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1) This is not the end-all be-all video on the ethicality of AI. It's one side to an argument, which you should consider alongside other sides to the argument.
2) The points made here are valid, as the tech referenced is either available today or feasible within the coming years.
3) I would respect opinions of Hawking and Musk. Given their previous cross-field contributions, I imagine they have a strong ability to predict big-picture consequences. The "technical statistics PhD AI master-sage-wizard" you are hoping will weigh in, might not be able to predict all the possible ramifications of AI.
4) Yes AI can help people. But bad actors always persist. Given the stakeholders affected by them, we can assume the United States, JP Morgan, Yahoo, etc. have some of the strongest cybersecurity available. If they can and have been breached, it's reasonable to say that there is significant risk of bad actors doing similar with Autonomous Weapons (which, in this situation, would have bullets- not just credit card numbers). Which means we should be apprehensive about how we build.
youtube 2018-12-08T00:1… ♥ 2
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | mixed
Policy         | regulate
Emotion        | mixed
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyrplhsBT3nFWgCI9B4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyksSvqXMK_yGMUxGh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzm7evec27OcKA8eOB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPFsAYP-e5Vuw8fBV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz70OAVGz0ggTxQuZt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7jIJbwjjOYNVRtf94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw6_kKDHyaWdkLVhVt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxOvlAcjFraYeFgGad4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxbcmSie5k5VI-jOWB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzp8zSaWb45ssGAl994AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
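A response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming each record must carry the four dimensions shown (responsibility, reasoning, policy, emotion); the allowed category sets are inferred only from the values visible in this response and may be incomplete. The `ytc_x` id in the usage line is a hypothetical placeholder, not a real comment id.

```python
import json

# Category sets inferred from the response above (assumption: the real
# codebook may define additional values not seen here).
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment id,
    rejecting any record with a missing field or unknown category."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical single-record response:
raw = ('[{"id":"ytc_x","responsibility":"unclear","reasoning":"mixed",'
       '"policy":"regulate","emotion":"mixed"}]')
print(parse_codings(raw)["ytc_x"]["policy"])  # regulate
```

Validating at parse time means a malformed or hallucinated category fails loudly instead of silently entering the coded dataset.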