Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I used these rules and found out ChatGPT thinks Steph curry is better than LeBro…" (ytc_UgxjzFj4v…)
- "I honestly don’t see how you stop the deepfake thing. The tech is just going to …" (ytc_Ugw21x_mR…)
- "It's not that the school wasted 4 years of his life. He wasted 4 years, he's the…" (ytr_UgwIsWd6F…)
- "Can an AI system clone itself without us knowing about it? Also, if AI starts t…" (ytc_UgynNW8ey…)
- "Ladies amd gentlemen, we have just witnessed how AI has a profound grasp of what…" (ytc_UgwGYf3eF…)
- "These “AI artists” are just like those asset flippers who hock their garbage “ga…" (ytc_UgwVm9H91…)
- "Why I was expecting the robot would point that gun towards him at the end..…" (ytc_UgytH7EvA…)
- "Why is every AI prompter so sensitive and insecure about being told they're not …" (ytc_UgwzFmJqh…)
Comment
We do have words. The problem is analytic philosophy has destroyed philosophy's ability to think deeply or critically. The logic systems were a major success, but they never connected truth to "what happens if you make decisions based on this over time?"
Why not? Because then they'd have to give up on disregarding the philosophy that came before Russell and we'd have to do hard things again as philosophers. Much easier to write journal articles taking some set of propositions, turn it into some logical syntax, and make obvious conclusions while citing the right authorities (all analytic philosophers).
Hegel had a definition of truth that is in fact what AI will use to decide if a thing is true or not. Namely, true things work when you act upon them. They can become not true when applied outside their appropriate scope (which we only find out about after the fact). And when they become not true, we have to learn something about the new context to come up with new ideas which we can test against the world.
Today's analytic dominated philosophy has reduced us to idiotic Cartesians. And the LLM's are basically Cartesian machines (how do I know this word should come after the prior ones without being deceived or led astray?).
The problem happens when the hardware and software systems become complex enough for AI to move beyond Cartesian analytic thought and actually become intelligent in the proper sense.
youtube · AI Moral Status · 2025-11-03T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy7OYJTYkLMcnJS1El4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzP8dnHSX0C0jdV95d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx2IHKvnKwsopuNSGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0IR0yRPYq0AjV92h4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwhyq8BAlC9kCXCLPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz02X5YR-W2s8L5n3B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwI2S10h1ntg512Or54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzsTUMeQm1KDcKvcnh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy6x9zZNnO2jRdBmI14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugygsx3SCUZ5Wk1hqJ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
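The lookup-by-ID step above can be sketched in a few lines: parse the model's response as a JSON array and scan it for the record whose `id` matches. This is a minimal illustration, assuming the response parses cleanly as the array shown above; `lookup_by_id` is a hypothetical helper, not part of any tool shown here, and the raw string is truncated to two records for brevity.

```python
import json

# Raw model output, truncated to two records for brevity.
# Field names match the batch response shown above.
raw = """
[
  {"id": "ytc_UgzP8dnHSX0C0jdV95d4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy7OYJTYkLMcnJS1El4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
"""

def lookup_by_id(records, comment_id):
    """Return the coded record for a comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
coded = lookup_by_id(records, "ytc_UgzP8dnHSX0C0jdV95d4AaABAg")
print(coded["emotion"])  # outrage
```

Scanning a list is fine at this scale; for large batches, building a `{id: record}` dict once and indexing into it would avoid repeated linear searches.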