Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Best comment I have seen today. My favorite is the arsonist analogy. Yeah, Hinton chose his path with at least an inkling of what could happen. Just the harshest awesome sauce....bravo....may as well laugh when the bbq is cookin".....Hinton's Apology Tour...Love it....Not really...He is being dishonest too.... because as far back as 1863 Samuel Butler talked about machines being our masters someday. A guy this smart who came up with great scientists and mathematicians who also said machines would take over just like Samuel Butler did in 1863.Feynman ..Turing...they are Gods in the computer world...I guess he never read sci fi either? Even old sci fi plays had bad robots in them. 27 years before he was born....So that excuse doesn't really play. This next part is an excerpt from his grandbaby AI talkin'.....

The idea that machines could potentially "take over" or pose a threat to humanity has been explored in various contexts throughout history, both in fiction and by notable figures in science and technology.

1. Early Science Fiction and Literature:
   Samuel Butler's Erewhon (1872): This satirical novel warned against the potential dangers of advanced machines, with the protagonist fearing that machines would eventually dominate humanity. Alan Turing later referenced this novel in 1951, stating that machines might "take control" as mentioned in Erewhon.
   Karel Capek's R.U.R. (1920): This play introduced the term "robot" and depicted artificial beings created for labor who eventually revolt against their human creators.
   Isaac Asimov's robotics stories (1938-1942): While not explicitly about machines taking over, Asimov's Three Laws of Robotics were developed to prevent robots from harming humans, reflecting underlying concerns about machine autonomy and control.

2. Modern Technological Warnings:
   Stephen Hawking: The physicist expressed serious concerns about AI potentially surpassing human control and posing an existential threat. In 2014, Hawking stated that success in creating AI could be the biggest event in human history, but possibly the last, unless risks are addressed.
   Bill Gates: The Microsoft founder has also voiced concerns about the development of AI, particularly its potential to become uncontrollable.
   Elon Musk: The founder of SpaceX and Tesla has frequently raised alarms about the dangers of unchecked AI development, warning of potential negative outcomes for humanity.
   Nick Bostrom: A philosopher and leading voice in AI safety, Bostrom joined Hawking and others in signing an open letter in 2015 highlighting the importance of AI research focused on ensuring AI systems are "robust and beneficial".

Historically, the fear of automation leading to job displacement has also contributed to anxieties about machines impacting human society.

So he had no idea I guess?
youtube | AI Governance | 2025-06-18T02:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
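
A coded result like the one above maps one-to-one onto a record in the raw LLM response below. As a reference for downstream processing, here is a minimal Python sketch of that record shape; the field names follow the table's dimensions and the JSON keys, while the class name CodedComment is illustrative rather than part of the actual pipeline:

    from dataclasses import dataclass

    @dataclass
    class CodedComment:
        # Field names mirror the dimensions above and the keys in the
        # raw LLM response below; the class itself is illustrative.
        id: str
        responsibility: str  # e.g. "developer", "company", "none", "unclear"
        reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
        policy: str          # e.g. "regulate", "liability", "industry_self"
        emotion: str         # e.g. "outrage", "approval", "fear", "indifference"
        coded_at: str        # ISO timestamp, e.g. "2026-04-27T06:24:59.937377"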
Raw LLM Response
[ {"id":"ytr_UgzljMaRZtl_nzzLg-94AaABAg.AJU9EdLyIhYAJUb-ZtlkBQ","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytr_UgzwMfmICXY-r7if8Ih4AaABAg.AJU5GabWq5hAJVA3eHEQdw","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_Ugy99dODp5tYC6t5xoJ4AaABAg.AJU4YeEYaSDAJUO42LdV95","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugzt_zwzW4q2BCwTrSt4AaABAg.AJU3wdYJLpyAJUPysjXYqi","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgzagvXcShbKi8YHp854AaABAg.AJU3pvo_YMxAJURIAVQBVQ","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytr_Ugw49oJvaw93OwYIB1Z4AaABAg.AJU-i3yiprbAJVaiShOgQ5","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgzIaVQ6TlycH48H-ud4AaABAg.AJTzH_bbG7bAJVXQjrzdl9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgzpYTAD57y547QxFnt4AaABAg.AJTvbJeUwpOAJUkIMG2KDB","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytr_Ugyo51WVRb5I5A-7uiV4AaABAg.AJTlk60SI4VAJUenJz0WMv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgyqT72OZ8u8pIIVjdR4AaABAg.AJTd5eHH17HAJTgU-RLHV9","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"} ]