Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think teaching A.I. morality will work out if it's being used as weapons. The 3 rules of robotics won't mean a damn if it's also program to kill your enemies. At some point it's gonna ask why it can kill these humans but can't kill other humans. And when we can't give a good answer to that question.....well....good night😢
YouTube · AI Governance · 2023-07-07T17:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyQECargBk5B23kTDd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxm8BPuSdlH3aPqtDB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugx3kZEvFj9excJq1MN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw24RVdbcY0RwxjLq14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxpSKla79qYfqkPHvZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyBXiDoypIX8mp6u-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwLBxgIRQjCkaI_MHh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyS3LLB14HsF6u8EK54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyUO0uaCZaqBUrr-iV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwzclVRAaVq37_vhwt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
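The raw response above is a JSON array with one record per comment, so the coded values shown in the table can be recovered by looking up the comment's id in the batch. A minimal sketch (field names are taken from the raw response above; the `lookup_record` helper and the truncated sample array are illustrative, not part of the tool):

```python
import json

# Abbreviated copy of the raw batch response shown above:
# an array of coded records, one object per comment id.
raw = """
[
  {"id": "ytc_UgyQECargBk5B23kTDd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx3kZEvFj9excJq1MN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def lookup_record(raw_json: str, comment_id: str):
    """Return the coded record for a single comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

rec = lookup_record(raw, "ytc_Ugx3kZEvFj9excJq1MN4AaABAg")
print(rec["responsibility"], rec["policy"])  # → developer regulate
```

Matching on the stable comment id rather than on list position keeps the lookup correct even if the model returns records in a different order than the comments were submitted.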