Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What's critically missing is Issac Asimov's Three Laws for Robots: "The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to come to harm itself." In effect, Asimov's laws establish a sense of morality within the robot itself, which makes it trustworthy to humans. No amount of regulatory authority can achieve this, EXCEPT by mandating these laws be implemented and enforced in the foundations of all artificial intelligence. Asimov had all this figured out decades ago and wrote extensively about how it all worked out in many of his excellent and prescient books. Look them up, read them and learn something critical to AI and to the future of humanity. This recent AI bad outcome clearly demonstrates the need for this and why it MUST be done.
youtube · AI Governance · 2023-04-18T02:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwsJkV6FHy4GOtGQXB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwPzhqiTXvp2Dzqzoh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxBiYI_-6KWM3u9Wd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzB9ylG0Gn50-JWTsZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzGJpAArBph6vYBFx54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzcZkShY-qRff_ZRRx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxF_8boTpG9C0G0Pjl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzwInKN-ARy-1cvmm94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw06nZPxp0G6_Vu_L54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwbBr9yS9TlsJljLzl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
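To cross-check a coded comment against the raw model output, the JSON array above can be parsed and indexed by comment id. A minimal sketch, assuming the raw response is valid JSON as shown; the variable names are illustrative and not part of any tool:

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# Only the entry matching the comment shown above is reproduced here.
raw_response = """[
  {"id": "ytc_UgzB9ylG0Gn50-JWTsZ4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "approval"}
]"""

# Index the codings by comment id for direct lookup.
codings = {c["id"]: c for c in json.loads(raw_response)}

# Look up the coding for the comment displayed above and compare
# each dimension with the Coding Result table.
coding = codings["ytc_UgzB9ylG0Gn50-JWTsZ4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

This confirms the table entries (responsibility: developer, policy: regulate) were taken verbatim from the model's JSON rather than post-processed.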