Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I weirdly think a lot of those redditors and AI bros making those angry comments…" (ytc_UgyiHn40E…)
- "well historical data of the last 10 years shows discrimination against white men…" (ytc_Ugzu8E4kp…)
- "AI is not something to collaborate with, it's not intelligent. Ask it yourself, …" (ytr_Ugx8_EkyD…)
- "It aint gone last...just wait. Then robots aint supoosed to be there...they just…" (ytc_Ugx18_Jkk…)
- "And cumulative emissions over time (e.g., since the start of the industrial revo…" (rdc_gtcrp80)
- "Wtf shut it down,a.i. is only hope to save humanity or take it over either way,w…" (ytc_UgxrzfEMP…)
- "Robot:just one more box... Human:Mr.claw you need…" (ytc_Ugx6an4sn…)
- "A couple of years ago I was watching programs about AI, where it was stated that…" (ytc_Ugw8VAijd…)
Comment
In my view, the "Chinese Room" is NOT POSSIBLE outside thought experiments. We cannot use it to say AI doesn't understand, as that assertion is not scientific once you analyze the thought experiment. Its core premise is entirely flawed. There is no 'book of phrases' that could ever hope to convincingly make the person in the room seem fluent in Chinese; it would have to be nearly infinite in size, containing every possible response to every possible combination of questions assembled with language, or the user would have to spend an incredible amount of time, essentially learning Chinese in the process.
Prior to LLMs, which are the thing in question here, even our best language translation software could not convince a fluent speaker. So you cannot say AI is the only example of a Philosophical Zombie. Philosophical zombies do not exist outside thought experiments either. There is no example of any such thing in our natural world, and AI cannot be the only example, or we are proving nothing. It's not a comparison.
youtube · AI Moral Status · 2025-07-10T15:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyw8YT20Q93sMTVqNd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHj_F15n6L0vIlwl94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGSOXZqbDo-VM41fR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwSCBfrFaxcq7IkS114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzE_xZAPqlnU0jeMXN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwzaevmGpm74yW6ahp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxFQAV87LRPzQMo34V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzsm2L8syPTfvMgXY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY7r-hzdYKAnqrklp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgznkdvbfX0SK_mfQLt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
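A raw response like the one above can be parsed and indexed by comment ID for the lookup shown at the top of this page. The sketch below is a minimal, hypothetical example: the allowed category values are inferred only from the records visible here (the real codebook may define more), and the two sample records are abridged from the array above.

```python
import json

# Two records abridged from the raw LLM response shown above.
raw = '''
[
 {"id":"ytc_Ugyw8YT20Q93sMTVqNd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxHj_F15n6L0vIlwl94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
'''

# Allowed values inferred from this page; the actual codebook may list more.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "approval", "mixed", "fear"},
}

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID,
    skipping any record with an out-of-vocabulary dimension value."""
    by_id = {}
    for rec in json.loads(raw_json):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgxHj_F15n6L0vIlwl94AaABAg"]["emotion"])  # approval
```

Dropping out-of-vocabulary records (rather than raising) is one design choice; a stricter pipeline might instead log the offending record for manual review.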