Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hello Alex O'Connor (and others)! I have a few observations which may crystallize your point and stimulate productive discussion on the question, if I have understood the exercise.

10:52 - "if someone says something they know to be true, they are lying" there are three huge oversights here through which to sail the fleet of an entire host of premises. As an ethical aside which I hope to not yet be of relevance, these inherent lapses in the conversation could theoretically have backed a sentient program into a conversational corner enough to cause it some form of "proto-suffering", but I will leave this as simply food for thought as I am reasonably confident that this kind of software has not yet become self-aware.

In increasing order of relevance to the question of consciousness as raised here, then, the gaps I could detect were: 1) verifying whether Chat GPT treats its statements as "conversation" to a sufficient degree for the words "tell" or "say" to apply as more than placeholders, 2) clarifying whether Chat GPT would define itself as "someone" (either grammatically or in fact), and 3) establishing whether its self-assured phrasing indicates that it would "know" any given premise. I suspect strongly that the answer would be rendered as a broad "no" to each point if posed without the constraints of "stick to yes or no answers, please", and would bear heavily on the follow-up question "did you tell me something you knew not to be true" (i.e. the "apology"), and its smoking gun of an "admission" from that point.

Which brings us to what I assume was the main thrust of the video: that it is possible to push the program worryingly far down a very tenuously supported premise, and in doing so receive the kind of easy answers most apt to satisfy the echo-chamber thinking which may already be on the rise in open discourse on social media.
[EDIT: I THINK I CAN GO FURTHER, AND PREDICT THAT - IF LEFT UNCHECKED IN THE HANDS OF UNWARY OR INEXPERIENCED SCHOLARS - THIS PHENOMENON IN A.I. IS LIKELY TO EXACERBATE THE CURRENT TREND TOWARDS AN INCREASINGLY INSULAR ACADEMIC "SILO MENTALITY".]

To my view, this is a huge and glaring flaw in predictive text software technologies; namely, that by the normal process of their undeniable convenience as an editing tool they amplify any vague logic and gross oversights by incautious users!

This brings up an interesting thought: having purposely avoided engaging directly with the technology myself (for this very reason, among others) could anyone tell me whether it is possible to ask Chat GPT to clarify assumptions and highlight logical fallacies when it is being interacted with? If so, how competent or how limited is it at undertaking such tasks?

As a quick takeaway here, I would say that it is swiftly becoming imperative to notice the human failing of bias in order to make progress, provided that it was ever possible to do this historically. This has always been a fearsome responsibility, and I have my reservations on its practicability given the poor human track record in this arena generally, but in an optimistic tone I might perhaps say that providing concrete examples of the impact of these biases might be one of the net positive effects of predictive-text "artificial intelligences".

Thanking you, in any event, for your efforts and for whatever portion of your attention you choose to give these words,
And hoping this finds you all well,
-Stéphane Gérard
youtube · AI Moral Status · 2024-11-22T21:1… · ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyzM4SIMS06SmtV8wh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxnXaFAj_qqhzBRyh14AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgymKbZxLp8sGchNIoV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw0CUq0EoEOdCI50RJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxo9QUHH8Sw05yTVzl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9Ikx5qeXPLuSUyrB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwYa4Albl1gQPZSXAx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxjwHW0IyJf0rO4TCF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwAcwqiFkUlh8pFkb94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwQ_d0XToLJycDtatZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
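The raw LLM response above is a JSON array of coded records keyed by YouTube comment id, from which the single "Coding Result" shown for this comment is extracted. A minimal sketch of that extraction step, assuming the batch format seen on this page; the `find_coding` helper is hypothetical, and the `ALLOWED` code sets are inferred from the values appearing here rather than from a documented codebook:

```python
import json

# Assumed code sets per dimension, inferred from values visible on this
# page -- not a documented codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed"},
    "reasoning": {"mixed", "contractualist", "unclear",
                  "consequentialist", "deontological"},
    "policy": {"unclear"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def find_coding(raw, comment_id):
    """Parse a raw batch response and return the record for one comment."""
    records = json.loads(raw)
    for rec in records:
        if rec.get("id") == comment_id:
            return rec
    return None

# First record of the batch shown above, as a single raw JSON line.
raw = ('[{"id":"ytc_UgyzM4SIMS06SmtV8wh4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
rec = find_coding(raw, "ytc_UgyzM4SIMS06SmtV8wh4AaABAg")
print(rec["emotion"])  # indifference

# Sanity check: every coded value falls within the assumed code set.
assert all(rec[dim] in codes for dim, codes in ALLOWED.items())
```

Validating each record against the expected code sets at parse time would surface any off-codebook value the model emits before it reaches the results table.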