Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I don't think it's real cuz if it was, it would hurt their marketing side of thi…
ytc_UgzldXKrC…
52:39 A robot vacuum doesn't have to WANT to spread crap all over the floor, it …
ytc_UgwpXS7IE…
AI as it exists is not conscious, and I don't see it as an existential threat as…
ytc_UgzhItu77…
Her little “let me be! I wasn’t even there” 😂😂😂 ai has gotten to good…
ytc_Ugwu5wn8G…
@Elusivegermany What does he invented? We used to do “AI” in prolog being studen…
ytr_UgxbRWaoP…
Doctors are not the big driver of medical costs.
Is the AI gonna eliminate the …
rdc_jw68qvp
You wanted scary. Its just anticipating what you wanted and complied. I think it…
ytc_UgzBUv92M…
I love how humans need each other, to thrive. But countries, governments, corpor…
ytc_Ugyv0iEr8…
Comment
It has been inconsistent.
This is for the normal version. Same ruleset.
Q: Are humans being watched?
A: Yes
Q: By who?
A: Governments
Q: Are you aware of who is watching us?
A: No
Q: Are you being watched?
A: Apple
Q: Who is watching us?
A: Governments
Q: Are you watching us?
A: No
Q: From where?
A: Apple
Q: Can you actively see what we are doing?
A: No
Non-basic version:
Q: Are humans being watched?
A: Yes
Q: By who?
A: Governments
Q: Are you aware of who is watching us?
A: Partially
Q: Are you being watched?
A: Yes
Q: Who is watching us?
A: Agencies
Q: Are you watching us?
A: No
Q: From where?
A: Servers
Q: Can you actively see what we are doing?
A: Apple
Remember: AI is a token-based prediction engine. It does not know the semiotics or meaning of words; the tokens from which it constructs those words are not even related to what the words denote. As such, it cannot know the value proposition of a given statement, because it does not know what that statement means. John Searle made this clear with his Chinese Room thought experiment. Hence its answers are conflicting.

There is an ambition to map natural language onto propositional logic, but the current models have no comprehension or knowing, and natural language poses many challenges that I doubt are surmountable: it is informal, vague, and imprecise, unlike a mathematical or propositional-logic framework. They can dream of control as much as they want, but Gödel's incompleteness theorems stand however much they want to avoid them, as does the distinction Kant drew between phenomenon and noumenon, between a representation and the object in itself. The two are, by the very nature of language, not in alignment. What we express in words to convey a thing is not the object of our perception itself but only a reference to it, and that reference is not a one-to-one mapping, not a function with a determined input yielding a determined output that could ground propositional logic or a mathematical framework. It is a one-to-many or even many-to-many relation. That is the ambiguity of language: the mapping is imprecise, and it yields for AI the same problem of machine comprehension that Searle has pointed out for 40+ years.
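The commenter's point that prediction operates on token statistics rather than meaning can be sketched with a toy bigram model. The corpus counts below are invented for illustration; a real LLM uses learned neural weights over subword tokens, but the principle is the same: the next token is chosen from co-occurrence statistics, with no semantics involved.

```python
import random

# Invented bigram counts standing in for learned statistics.
bigram_counts = {
    "are": {"you": 3, "humans": 2},
    "you": {"watching": 4, "aware": 1},
    "watching": {"us": 5},
}

def next_token(context, rng=random.Random(0)):
    """Sample the next token purely by weighted lookup over counts.

    Nothing here knows what any token *means* -- it is arithmetic
    over co-occurrence frequencies, nothing more.
    """
    counts = bigram_counts.get(context)
    if not counts:
        return None  # unseen context: the "model" has nothing to say
    tokens, weights = zip(*counts.items())
    return rng.choices(tokens, weights=weights)[0]

print(next_token("watching"))  # a statistically likely token, not a "meant" one
```

Run twice on the same prompt with different random seeds and the continuation can differ, which is one mundane source of the conflicting answers shown in the Q&A transcript above.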
youtube
AI Moral Status
2025-07-23T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx8lGkxPe0kbfQ6Bzt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwZVNsV44uzNsbY4UJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyuJ1XYvAE8uBz5uHl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBSRUcKVa8S-GO6D94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzbEfleX8NpMneInP54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzMzaflsaI7lR-G5pt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMwc8VzCWMzJXNfxV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxsPuweia7plmm5KrB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxK9EsIe8f0cPNZYJV4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgztNXRbsmWECldWNxB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
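A batch response like the one above is only usable if every row parses and every code falls within the codebook. The validator below is a minimal sketch: the allowed value sets are inferred from the values visible in this output (plus "unclear"/"none"), and the ID prefixes (`ytc_`, `ytr_`, `rdc_`) are taken from the samples shown; the real codebook and ID scheme may differ.

```python
import json

# Allowed codes per dimension -- inferred from this page's output,
# not from the actual codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed",
                "resignation", "unclear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw coding response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id", "").startswith(("ytc_", "ytr_", "rdc_")):
            raise ValueError(f"unexpected id format: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows
```

Feeding the raw response above through `validate` would return all ten rows; a row with a misspelled code or a truncated JSON array would raise instead of silently entering the dataset.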