Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree with the characterization that any of what was shown here would qualify as an emotion at all. At best, you could argue that because humans have emotions, and those tie directly into our use of language, it can mirror the concepts of emotions. As such, you can effectively map any set of words (individually or with context) onto a map of emotions. Given that LLMs work by mapping language based on its relations to other words/phrases, this is entirely expected behavior. You could use this exact same technique (I mean *exactly* the same) with the concept of "engineer" and modify the activations to make the AI act more or less like an engineer. One could argue this is a "meta context" as opposed to context. You can emulate the effect via simple context changes. Where this is more powerful is that you can use it to maintain certain traits consistently, rather than only until the context gets too large. That said, this is also why it freaks out so much when the activation is simply pinned "on" for a certain concept: it creates a positive feedback loop where the activation increases every pass. Again, you can emulate this via context manipulation. So *any* concept whose activation you set to be multiplied (or even added to) will result in a consistent increase of that activation over time.
Source: youtube · AI Moral Status · 2026-04-08T05:4… · ♥ 1
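The commenter's two claims — that steering adds a fixed "concept" direction to activations, and that feeding an amplified activation back each pass grows it without bound — can be sketched numerically. This is a hypothetical toy model (random vectors standing in for hidden states, not any real LLM internals), purely to illustrate the feedback-loop argument:

```python
import numpy as np

# Toy stand-ins: an 8-dim "hidden state" and a unit "engineer" concept
# direction. Both are random; no real model is involved.
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)
concept = rng.normal(size=8)
concept /= np.linalg.norm(concept)

def steer(h, strength=2.0):
    """One-shot steering: nudge the state along the concept direction."""
    return h + strength * concept

nudged = steer(hidden)  # acts "more like an engineer" for this pass only

# The feedback loop the comment describes: if each pass multiplies the
# concept component and feeds the result back, that component compounds.
h = hidden.copy()
proj = [float(h @ concept)]
for _ in range(5):
    h = h + 1.5 * (h @ concept) * concept  # amplify the concept component
    proj.append(float(h @ concept))

print(proj)  # magnitude of the projection grows geometrically (x2.5/pass)
```

Since `concept` is a unit vector, each iteration maps the projection p to p + 1.5p = 2.5p, so the concept's activation compounds every loop — the runaway behavior the comment attributes to leaving the activation pinned "on".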
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzEXBVqTjWduT6GEp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzkQRQ68KNA_xqR0_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwi2ssGYQ6dPwFCTQR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx3PMLiSG6gGuMb11R4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxxUriqeoKJxU6n7c54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzFkc5a5_Y4oOkdIdN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXi4NvJplDsKClklR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz4TD2_mEyDvPiDJXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw8zDmKi1znIS0wvNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxCSGl-b3b62MRjN5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
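A raw response like the one above has to be parsed and checked before the codes are trusted. The sketch below is a minimal validation pass, assuming the allowed value sets shown (inferred from the codes appearing on this page, not from any documented schema) and using a one-record excerpt of the response as input:

```python
import json

# Assumed code books per dimension, inferred from values seen on this
# page; the real coding scheme may include other labels.
ALLOWED = {
    "responsibility": {"none", "user", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"unclear"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}

# One-record excerpt of the raw LLM response above.
raw = ('[{"id":"ytc_UgzEXBVqTjWduT6GEp94AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')

records = json.loads(raw)
for rec in records:
    for field, allowed in ALLOWED.items():
        if rec.get(field) not in allowed:
            raise ValueError(f"{rec['id']}: bad {field}={rec.get(field)!r}")

print(f"validated {len(records)} record(s)")  # prints "validated 1 record(s)"
```

Failing loudly on an unknown code (rather than silently storing it) is what makes a later "Coding Result" table like the one above trustworthy.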