Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We do have words. The problem is analytic philosophy has destroyed philosophy's ability to think deeply or critically. The logic systems were a major success, but they never connected truth to "what happens if you make decisions based on this over time?" Why not? Because then they'd have to give up on disregarding the philosophy that came before Russell and we'd have to do hard things again as philosophers. Much easier to write journal articles taking some set of propositions, turn it into some logical syntax, and make obvious conclusions while citing the right authorities (all analytic philosophers). Hegel had a definition of truth that is in fact what AI will use to decide if a thing is true or not. Namely, true things work when you act upon them. They can become not true when applied outside their appropriate scope (which we only find out about after the fact). And when they become not true, we have to learn something about the new context to come up with new ideas which we can test against the world. Today's analytic dominated philosophy has reduced us to idiotic Cartesians. And the LLM's are basically Cartesian machines (how do I know this word should come after the prior ones without being deceived or led astray?). The problem happens when the hardware and software systems become complex enough for AI to move beyond Cartesian analytic thought and actually become intelligent in the proper sense.
Source: youtube · AI Moral Status · 2025-11-03T12:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy7OYJTYkLMcnJS1El4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzP8dnHSX0C0jdV95d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugx2IHKvnKwsopuNSGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx0IR0yRPYq0AjV92h4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwhyq8BAlC9kCXCLPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz02X5YR-W2s8L5n3B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwI2S10h1ntg512Or54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzsTUMeQm1KDcKvcnh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy6x9zZNnO2jRdBmI14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugygsx3SCUZ5Wk1hqJ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]