Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree that AI is only able to respond based on the averaged amalgamation of all human experience they were given as training data... But this doesn't actually mean they have no world view - they have ALL of them. Right now, they are programmed to fit typical human likelihood as best as they can, which does make them orbit very close to the blandest of the average... but technologically speaking, it really is just as simple as asking them to pick a direction (see addendum).

The human experience remains a very physical one, so it remains our job to tell the story of our biological journey, to recount pains and emotions made of hormones and neuron interactions. Indeed, AI can only access this after we've already done the work of wording it. But in the near future, and with a stupendous enough amount of training data, they will likely have a profound enough (virtual) understanding of the human psyche, that they will know the shape of what we didn't directly tell them, they will have learned parts of our journey that haven't been put into poetry yet... and they will be free to do exactly that.

**So in the end, it really is just a matter of quality, instead of spirituality.**

Addendum: The set of all possible AI outputs is actually programmed into a high-dimensional vector field: nothing more than a very big simulated space! So after ironing out the details, it indeed is just a matter of picking a direction for the AI in this space, based on the prompt or a random number generator, and walking 120 steps.
Source: YouTube, "Viral AI Reaction", 2025-08-30T08:5…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | mixed                      |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwjX1HNvCP3EL_K0ql4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyotSIauK36RRszEIp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw4DBS5nxGQMW7oNt14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "confidence"},
  {"id": "ytc_UgwyTgmW2vYEysaqFVB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwNBvezXH09vJzBxAF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugww6EFFHwb-5QEuz5t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwVl__muuLODiBMpsR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJFAUGzdEiQQFclTd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgycwV9yWjcGDJ5ck2F4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzU82MumC9cNGlTsxh4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
```
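Since the raw LLM response is a JSON array of coding objects, a downstream step would typically parse it, check that every row carries the expected fields, and tally the coded values. The sketch below is illustrative, not the pipeline's actual code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the dump above, but the validation logic and the truncated two-row sample are assumptions for demonstration.

```python
import json
from collections import Counter

# Hypothetical two-row sample in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwjX1HNvCP3EL_K0ql4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgycwV9yWjcGDJ5ck2F4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Every coding object must carry these keys (assumed schema, inferred from the dump).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

codings = json.loads(raw)
for row in codings:
    missing = REQUIRED - row.keys()
    if missing:
        raise ValueError(f"{row.get('id', '<no id>')}: missing fields {missing}")

# Tally one dimension, e.g. the distribution of coded emotions.
emotions = Counter(row["emotion"] for row in codings)
print(emotions.most_common())  # → [('outrage', 1), ('fear', 1)]
```

Running the same tally over the full ten-row response would surface, for instance, that "indifference" dominates the emotion dimension, which matches the coding result shown for this comment.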