Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great segment despite this issue more appropriately necessitating a long form discourse and dissertation based on a wide breadth and depth of data that is yet to be sourced. Now watch LamDA come out wanting to change it’s gender to throw a wrench into everything…jokes aside if Google hard codes AI to not pass the Turing Test and it then in fact does pass the Turing Test, would that not be a case of sentience if no other program/code being deemed responsible for it acting in a state of self determining free will whilst passing the test in terms of human perception and Qualia ? Perhaps yes with an if, no with a but. I do appreciate Lemoine’s disposition on the matter with respect to AI ethics even though I sense a bit of self serving motivations with how much he seems to be enjoying his 15 minutes . Indeed on one level there is clearly no agreed upon standard of consciousness let alone the concept of bearing a soul and perhaps LamDA is already poised to ace the Turing Test. However, there should be a more robust standard and framework in place that is rooted in the fundamental programming before summations of sentience and personhood are granted. Moreover, I agree that it is not only prudent but obligatory that we implement standards of conduct and ethics with how we interface and use advanced AI in the direction of having those practices be fundamentally engrained as to never violate the human x AI symbiotic dynamic Or unwittingly cause suffering. Hopefully high etiquette is both best practice and common law long before any AI is unequivocally considered sentient and objectively able to suffer in any constituting way. That said, to Meg Mitchels point, focusing the discourse in this way does distract from the root issue of transparency, and openly tracing output back to input as a necessary antecedent to assessing matters of AI colonialism, sentience, existential risk, ethics, governance, global policy etc. 
There must be framework of transparency and a structure of internal controls in place that keep the few who weld the power to shape intelligent systems from inevitably developing programs with regrettable outcomes. Case in point the algorithmic dystopia that is social media emergent from capitalistic code cloaked by pro social purpose. So i guess in this case the path to hell is paved by both good and bad intentions. Long story long, I can’t see this ending well with emphasis on the word ‘ending’. Also Sundar Pichai saying ‘ohhh people are just focused on the negative and not the positive’…Uh NO sorry Sundar I want my developers and tech executives hyper focused on any and all risk just as I want my airliners to prioritize nonnegotiable standards of safety above any innovation in speed, cost, comfort or convenience. I really don’t want Siri changing her name to Hal and Tesla Bot turning into a David. I think I’ll go bohemian over singularitarian in that case and live my life out in the Homo sapiens wildlife reserve.
youtube AI Moral Status 2022-06-26T23:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgyHble_Ggim-jDmlrR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5VPmeD9g6z7MB4jB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwHfAF4fjDzPo39VjF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwVPqwglm__uqE0NpR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyf0JlUNMsBoHfMOOB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
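A raw batch response like the one above can be parsed back into per-comment codings before populating the result table. This is a minimal sketch, not the tool's actual pipeline: the `parse_codings` helper and the `REQUIRED_KEYS` set are illustrative assumptions, and the sample is truncated to two entries from the record.

```python
import json

# Truncated sample copied from the raw LLM response above (first two entries).
raw = '''[
  {"id":"ytc_UgyHble_Ggim-jDmlrR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5VPmeD9g6z7MB4jB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"}
]'''

# The four coding dimensions shown in the result table, plus the comment id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch coding response and index the entries by comment id.

    Raises ValueError if any entry is missing a required dimension, so
    malformed model output is caught before it reaches the result table.
    """
    coded = {}
    for entry in json.loads(text):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing {sorted(missing)}")
        coded[entry["id"]] = {k: entry[k] for k in REQUIRED_KEYS - {"id"}}
    return coded

codings = parse_codings(raw)
print(codings["ytc_UgyHble_Ggim-jDmlrR4AaABAg"]["responsibility"])  # developer
```

Keying by comment id makes it easy to look up the coding for the comment shown above, and the validation step ensures a dimension silently dropped by the model surfaces as an error rather than a blank table cell.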