Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1 person a long looong time ago, well before computers and cell phones had already identified the potential problem with A.I. and how to control it to the point it could never harm humans and why these so-called "experts" are not utilizing this fix in each and every code written into A.I. program is unbelievably short sighted. Issac Asimov's 3 Rules of Robotics coded into all A.I. programming would solve or prevent a Terminator Type of Judgment Day from happening. When science fiction author Isaac Asimov devised his Three Laws of Robotics he was thinking about androids. He envisioned a world where these human-like robots would act like servants and would need a set of programming rules to prevent them from causing harm. The 3 laws... A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Now these laws can be tweaked to prevent an A.I. from locking up humans in their homes for their "own good" to protect them, but they also need to be programed so an A.I. itself can never delete, alter nor suspend the 3 laws in anyway and that the laws remain active and followed at all times
Source: youtube · AI Governance · 2023-07-08T04:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwJMSgiQxR51EsbhoN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzykZUMGaowxLB6ieF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgyV_3BICcGnypp2WV54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxApC6MTgZqmnfHeL14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwrHcsSGgvGi275h-B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgyC0PoQIW98HMJ2rD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzIIzxcVMcR8V4PHm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyhGj9-wz5dekrR5bh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgyZswwirRKqJTariz54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgxZs3CIyjs0ullBpRF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]