Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@dr.mikeybee Interesting point about Platonic forms. I think I see where you're going with this. In some ways I think about neural networks like a high-dimensional "Galton Board", where each parameter acts like a pin that helps define probability flows. To me, what you're describing as 'Platonic forms' might be similar to the central vectors or paths that emerge through this space. Currents with the strongest flow. The stronger the model (more parameters/pins), the higher the resolution and the clearer these central paths become. And since there's vastly more written about careful ethical reasoning than the opposite, it seems reasonable that these paths naturally tend toward ethical frameworks as models get more sophisticated. So I think the error function isn't just optimizing for 'perfect knowledge' in isolation, but rather it is trying to find these fundamental patterns across all human discourse and reasoning. This would explain why larger models tend to develop more sophisticated ethical frameworks. They are able to maintain and connect more context simultaneously, seeing patterns across many different situations and domains. For example, as the model learns about Fox Hunting and Cultural Heritage and Animal Rights and.. on and on, it gradually gets the broader idea or "Platonic Solid" of the topic of Ethics and Moral judgment, which it can also use to extrapolate to topics it may not have seen before. So in this model, the 'parsimony' you mention could be thought of as these clear central paths that emerge through the probability space when you have enough resolution to see the broader patterns. What do you reckon?
youtube AI Governance 2024-11-12T03:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgwHeVLPF6I8pXP9Z-Z4AaABAg.AAj0yDgkyd0ABUItgrN1yt", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzBjl1hpXUD7IOFfKp4AaABAg.AAiw9N_FScfAAjKpGhPHK7", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgzBjl1hpXUD7IOFfKp4AaABAg.AAiw9N_FScfAD2LleJU-rW", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyzG9twp3oIzLyBuHp4AaABAg.AAinEVsZlxaAAioBD2m4Wk", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyzG9twp3oIzLyBuHp4AaABAg.AAinEVsZlxaAAiqbr4Qu8t", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyzG9twp3oIzLyBuHp4AaABAg.AAinEVsZlxaAAiw6y8IaKB", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugxgn2QDG4u3GwUCBPh4AaABAg.AAiiMJPJMSTAAl6a4jF1zY", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgwMu-OmCUyi6hATgtF4AaABAg.AAifIE25aCMAAjHnBD2Pmr", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwMu-OmCUyi6hATgtF4AaABAg.AAifIE25aCMAAjsQ_8-tLy", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgzcbFmhgeHbLrPqRyN4AaABAg.AAice88rX_RAAl0zVGBy5y", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
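The raw response above is a JSON array, one object per coded comment, with the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by the comment's id. A minimal sketch of validating such a response before storing it — the dimension names come from the coding table above, but the allowed-value sets and function name here are illustrative assumptions, not the tool's actual schema:

```python
import json

# Dimensions observed in the coding table above; the value sets below are
# inferred from the visible responses and may be incomplete (assumption).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "government", "user", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and drop rows that fail validation."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coding must reference a comment id
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(row)
    return valid

# Example with one well-formed row and one with an out-of-schema value.
raw = json.dumps([
    {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
     "policy": "none", "emotion": "indifference"},
    {"id": "ytr_example2", "responsibility": "martians", "reasoning": "unclear",
     "policy": "none", "emotion": "fear"},
])
codings = parse_coding_response(raw)
print(len(codings))  # only the well-formed row survives
```

In practice a row with an unexpected value might be flagged for manual review rather than dropped; filtering here just keeps the sketch short.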