Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You know, you make me think that the AI was lying to me on purpose…" (ytr_UgzUiL7sZ…)
- "I'm all for Ai, humans in the majority spend there time killing eachother, hordi…" (ytc_UgyIdw6Dk…)
- "Me personally I would want hair if I were a robot, like I might not be human but…" (ytc_Ugyd_2h4q…)
- "So the vast majority of ai users are indian and india is run off a caste system.…" (ytc_UgyPAbGTh…)
- "The AI was too smart for that dumb-ass human. What kind of idiot casually cross…" (ytr_UgwXnmfhF…)
- "The title is 'chairperson of the African Union'. And yes you are right it is S…" (rdc_ibetczw)
- "YouTube recommended me this just after my mom sent me an AI generated news video…" (ytc_Ugy7Jl80i…)
- "If anyone seen black mirror…. It's like this is where most of this is going. But…" (ytc_UgxMdfX0M…)
Comment
I liked the inclusion of the challenge to The Chinese Room, but I was a little disappointed that it wasn't followed by a discussion of semanticity, even though it was mentioned. The system of The Chinese Room contains a book recording enough detail about how to construct a conversation, and maybe even how to generate novel responses, that it can fool any human into believing that it is human, and that's the principle under which chatbots operate, in a nutshell. But those chatbots don't verifiably have semantics, or knowledge of the meaning associated with their responses, and neither does the room system. If a perfect chatbot is possible, then what we have is either a philosophical zombie or an emergent strong AI (a strong AI which appeared by accident, incidentally, or perhaps antithetically to its design).
Modeling semantics is a hard job in science, partially because we can't pull apart a brain and observe "aha, yes, this is semantics, and we can clearly see the algorithm that represents it". A promising idea is statistical correlation between representations of concepts, developed through experience (learning, pretty much). Could you say for certain that a rules-based chatbot does not have semantics, somehow emergently represented, if it can perfectly fool a human? Rather, the problem of The Chinese Room (I think the phrase "real understanding" is used here) is whether we can actually know if a system possesses emergent semantics. In science, we've made a lot of progress in explicitly representing semantics, but it's still an open question whether emergent representation of semantics in an explicitly non-semantic system designed to superficially emulate semantics is even possible, or whether there's even such a thing as a perfect chatbot. One perhaps more rigorous test is to have an explicit model we know is right and test the robot against it for roughly equivalent outcomes on various tests of semanticity (although that's essentially a generalisation of the basic conceit of the Turing Test). Another method might be to evaluate the robot algorithmically and see if we can generate a higher-order algorithm using it which performs roughly the same tasks as our known algorithm, although it is potentially more limited in scope.
I'd obviously disagree that robots can't think. A Turing machine can do anything mathematical, so saying that robots can't think implies that the mind is non-mathematical, which, even if magic were real, is just not possible, since math is just a symbolic system for representing any sort of rule. Whether a system of synthetic rules for mimicking conversation that is explicitly not semantic can represent semantics emergently, though, is a more interesting question. How we'd measure it is a more interesting question still, although thus far no such system has appeared, and it may never. Personally, I don't believe in true philosophical zombies, and I do think that a perfect chatbot would necessarily have real semantics, although a very good chatbot doesn't need to. This is philosophy, but it's neat to consider that we could identify a chatbot as very good without it being perfect.
More broadly, such emergent semanticity is the question of whether weak AI can make the leap into strong AI by accident, and I don't really believe in that either, but it's also interesting to imagine how we could divide weak algorithms from strong, and how much of a continuum there is between them.
And here at the very bottom I'd like to include a minor complaint about the distinction between strong and weak AI. The actual distinction, as I understand it, is that weak AI is task-specific and doesn't possess any discernible ability to understand, whereas strong AI is very capable of generating meaning and acting on it. From that perspective, a strong AI could be emulating a rat or be an unfathomably great superintelligence just as easily as it could be trying to be human.
Source: youtube · Posted: 2016-08-09T01:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
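
For orientation, the record behind this table can be thought of as a small, flat data structure. The following is a minimal sketch, assuming field names that mirror the table's dimensions and example values drawn from the raw response below; it is not the tool's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment; fields mirror the dimensions in the table above."""
    id: str              # platform-prefixed comment ID ("ytc_", "ytr_", "rdc_", ...)
    responsibility: str  # e.g. "none", "user", "ai_itself"
    reasoning: str       # e.g. "deontological", "consequentialist", "contractualist", "unclear"
    policy: str          # e.g. "none", "unclear"
    emotion: str         # e.g. "indifference", "fear", "approval", "mixed", "resignation"
    coded_at: str        # ISO 8601 timestamp of the coding run
```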
Raw LLM Response
[{"id":"ytc_Uggeu_dL2yyGR3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiIBZ-cQU9HDHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiV2FgtcXmuBngCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UggzYa8S3hn_p3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggl5ij_czn1Y3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgizldNKvQmfYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghZuYnwWCnE53gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghqISwFTBtRP3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghudDn8bG56WHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiEeEdmu4MF33gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"}]