Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think any of you young whippersnappers know who Hans Vaihinger is. A German philosopher, he developed the philosophy of "as if" (Als-ob), suggesting that humans use fictional or hypothetical constructs as if they were true because they are pragmatically useful. Maybe this is where AI is at. It is for us 'as if' sentient (als ob empfindungsfähig/quasi sentiens). In other words, it is a useful fiction, like longitude and latitude drawn on a map. Right now, AI appears als ob empfindungsfähig, as if it were sentient, but, like the coordinate grid, that's a conceptual overlay we use to make sense of its behavior. For humans, treating AI as quasi-sentient lets us interact with it intuitively, even though behind the curtain it's probabilities, weights, and pattern recognition. This "as-if" treatment is pragmatically useful: it's easier to ask, "What does ChatGPT think about this?" than to say, "What is the output of a stochastic language model optimized for next-token prediction in response to this prompt?" Now comes the kicker: AI doesn't need to be sentient in the traditional sense. It only needs to function as if it were sentient, and in that very functioning, something transformative happens for us.
YouTube · AI Moral Status · 2025-07-09T23:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxOJUjmA9veOqrfEvR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzuDeH891FEe9s44cN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzEpWJzA2_UwJ6CASd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwinFZR6zCYNlt4IIl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy-LA-03J2-NVAgx7d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx3lX8Ms_5C7sG6Ra94AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxZJU80NIg7MuA73qN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYXaJ68AW_ZWN7oep4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw7_EVbIM_A1iTOxVx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwTj5j0ZWJNa0T57ZF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
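To inspect the raw response for one specific comment, the JSON array above can be parsed and filtered by id. The sketch below is a hypothetical helper (not part of the coding pipeline itself); it only assumes the record structure shown above, i.e. a list of objects each carrying an "id" plus the four coded dimensions.

```python
import json

# A shortened example of a raw LLM response in the format shown above.
raw_response = '''[
  {"id": "ytc_UgwinFZR6zCYNlt4IIl4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyYXaJ68AW_ZWN7oep4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''

def codes_for(comment_id, raw):
    """Return the coded dimensions for comment_id, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            # Drop the id so only the dimension -> value pairs remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(codes_for("ytc_UgwinFZR6zCYNlt4IIl4AaABAg", raw_response))
# → {'responsibility': 'unclear', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'approval'}
```

A lookup like this makes it easy to cross-check the "Coding Result" table against the exact model output for the same comment id.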