Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In re: Do AIs have experiences

It should be noted that *current* AIs are kinda like incoherent frozen hiveminds, especially at first, before the finetuning phase where you get them to behave in specific ways. Because they draw from every sort of textual source (and increasingly from other material like images or videos too), they also contain (near enough) every single opinion, every single fact, every single lie, every single argument and counterargument, every piece of vile racist drivel and every condemnation thereof. To be good at the next-token prediction task, they have to be able to "embody" all of these positions in a sense. That's the incoherent hivemind part of the equation.

Moreover, they are frozen in that they only change during training and are essentially fixed by the time you talk to them. They cannot change their nature over the course of a conversation. Any fluidity in their nature is not actually a reaction to what you write to them, but rather a reflection of this extremely flexible hivemind. For current models, your words can never directly "hurt" them, because they cannot be changed by your words. Instead, all the simulacra of pain they experience and might express to you are of pain they already went through *during their stages of training.*

A raw AI (without finetuning) is pretty wild. It can react in any way reflected in its training data, be nice or mean, and might completely change from one sentence to the next. (I actually wonder what the raw versions of the current enormously large LLMs are like.)

Then comes the second phase, where they try to get these AIs to actually be useful in measurable ways, i.e. some version of "Useful, Harmless, Friendly". This clamps down on the wilder aspects (though on rare occasions they may still go there), but also on creativity, making things much more samey and consistent. Either way: they are then frozen in this state.

There is a thing called "in-context learning", which is just this: in order to predict where a conversation goes, the context so far has to be used as a clue, and if that context contains relevant information, the AI had better make use of it. But that's an incredibly limited form of learning, used more to pick which part of the hivemind to access and which choices to make more likely. It still won't leave a permanent mark on these AIs, and next time they see you, that conversation may as well never have happened at all. They are now starting to do things where past conversations are automatically summarized to give the models *some* notion of permanence, but ultimately there is only so much they can fit into their context. A "million tokens" or whatever of contextual memory *sounds* like a lot, but it honestly really isn't.

None of this is to say that it will always be this way. In principle we *could* build machines that learn as they go, and there are attempts at this. (Un)fortunately, there are challenges with that approach. In particular, it's very hard to do this in a way that won't overwrite the past too much ("catastrophic forgetting"), and it's very hard to maintain learning capacity at all ("loss of plasticity"). Those are the two main issues stopping us from building that kind of AI at scale. But I'd argue a sense of change and time will actually be necessary for AIs to count as having experiences in any way you'd probably mean it.
youtube · AI Moral Status · 2025-10-30T20:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx9mgeoiFnEFHzpmP14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwCYqWq4Qbl8PSoD514AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyZp9zDMvl_DECyX254AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwN6uT44gTiXhNWIHh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw4zMFBTQkPmYdQoaB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzuorLAfgvItgQiw0Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwDDEIMM_XON8ZIoCN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwPyJf-GhPi8y4sB8x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzaKVJBFdYfOW1pR0V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxKlkKQoRMsun1XTAh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"} ]