Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Hi Professor Hawking,
I am a student of Computer Science, with my main interes…
rdc_ctho8la

They dont feel them... Its an algorithm based on the way certain words are strun…
ytc_UgyXtuMxQ…

The Principled Code for Intelligence
In Service to Life, Liberty, and Limitless
…
ytc_UgxlOVqUW…

Dear @IshanSharma7390 as much as I admire your work in upskilling, I am ridicule…
ytc_UgzyWXqXq…

Please try ClaudeAI as this one sounds like it is shining you on, and stroking y…
ytc_UgzI5Ryrp…

The pathos rhetoric of this video is off the hook... People who really want a f…
ytc_UgzeB6W5s…

We already have a self driving infrastructure. Public transport. You can work wh…
ytc_Ugz1Bo-dT…

Exactly. And if no one is working, who’s going to pay taxes? The AI companies, t…
ytr_Ugw9AOhc8…
Comment
@outmywritemind1739 *_because the Turing test examines the intelligence of the questioner, not the capacity of the AI._*
Quite the contrary. The Turing test makes no attempt to examine HUMAN intelligence, because human intelligence in this context is considered superior and thus is being used as the "control." (That is, we're checking _artificial intelligence_ against _human intelligence._ )
You're looking at the purpose of the test backwards. The question posed is not, "Can a human be fooled by this machine?," the question is, "Can this machine fool a human?" I realize that might sound like semantics, but the issue is on which party are you placing the responsibility?
In your version, the responsibility is on the _human,_ so under this logic we aren't really testing the AI, we're just testing the intelligence of the HUMAN. In that case, we could probably put some early 2010 chatbot to the test and if our "control" is dumb enough, this barely-functional chatbot would "pass," despite having the intelligence of Clippy from MS Word.
In the actual version of the test, we are testing the *_machine,_* in which case our "control" needs to be someone not easily fooled. After all, the purpose is to determine how authentically "human" this AI can behave. If memory serves, if the human is convinced more than 60% of the time, the machine has passed.
We're well beyond that: when Replika was first released, its responses were so authentic that thousands of users emailed the developers asking if the "AI" was actually just a live person on the other end.
ChatGPT 4 has, IMO, surpassed Replika.
*_computer nerds that can tell the nuance between the humans and machines, even with ChatGPT._*
This is somewhat of a bad example, as most computer nerds (myself included) are very familiar with ChatGPT, so it's our knowledge of the AI that grants us this ability to differentiate.
But if I'm being honest, ChatGPT 4 would probably fool me if I didn't know I was talking to it, and you didn't make it obvious by asking, "Is this person real or AI?" (If you're asking, it's obviously AI).
I think the bigger question here is what have we decided is the definition of "consciousness?" Because from what I've read by naysayers in the AI community, the common objection seems to stem from us (humans) trying to map AI consciousness to organic consciousness. Like so many other facets of existence, human hubris seems to blind us to the fact that WE are not necessarily the end-all, be-all of what it means to be "conscious." We just like to think we are, and then try to map everything else to OURS, and when it doesn't fit, we conclude, "Not conscious."
Source: youtube · AI Moral Status · 2025-05-01T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytr_UgwMMENuWDVXdSZtVYd4AaABAg.AKT6zpPD_HpAKT770kEvIw","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugyu7lyIQ-hrwbOtBbB4AaABAg.AJIvPJWPZQpAKFpgQIf-aj","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugyu7lyIQ-hrwbOtBbB4AaABAg.AJIvPJWPZQpAKbFqba_Pk3","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgzGYJbqazSlDwRYtUR4AaABAg.AJCz4okHA4KAJGJDJvh08c","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzRlpGxkCQQF8KUd7t4AaABAg.AHgPqoWM7J8AHgiyMyy8zn","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugxeh1PZttCP_Y84tMx4AaABAg.AHexEzE0_JCAHtW-OemUiQ","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugxeh1PZttCP_Y84tMx4AaABAg.AHexEzE0_JCAHtXjjFbg2a","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgyYORCMOK7w190vP5h4AaABAg.AHZVmjOOVuvAHZxnxZGr4s","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyYORCMOK7w190vP5h4AaABAg.AHZVmjOOVuvAH_62-recww","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxkmKW2tdeZFMgLJE54AaABAg.AHM6KsKtNHxAHgljAgjDfF","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
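The raw response is a JSON array of per-comment codes over the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated before ingestion; the allowed label sets below are inferred from the values visible in this response and are assumptions, not the pipeline's actual codebook:

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# labels that appear in the raw response above, not the full codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw):
    """Parse a raw LLM response into {comment_id: codes}, rejecting unknown labels."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} label {value!r}")
        out[cid] = codes
    return out

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"liability","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["responsibility"])  # company
```

Validating against a fixed label set catches the common failure mode of coding LLMs drifting off-schema (e.g. emitting "anger" where the codebook only has "outrage"), so bad records fail loudly instead of silently polluting the coded dataset.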