Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
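Under the hood the lookup is just an index from comment ID to the record the model emitted for it. A minimal sketch, assuming the raw responses are stored one batch per line in a hypothetical `raw_responses.jsonl` (the file name and storage layout are assumptions, not the tool's actual format):

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Index every coded record by comment ID.

    Assumes each line of the file is one raw LLM response: a JSON
    array of records, each carrying an "id" field, as in the raw
    response shown at the bottom of this page.
    """
    by_id: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                for record in json.loads(line):
                    by_id[record["id"]] = record
    return by_id

# Hypothetical usage: fetch the coding for one of the comments shown below.
codings = load_codings("raw_responses.jsonl")
print(codings["ytc_UgxxUriqeoKJxU6n7c54AaABAg"])
```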
Random samples from the coded set (truncated previews with comment IDs):

- "Right now, students do have a robot, they have to walk around with a laptop ever…" (ytc_UgyNtmrcX…)
- "There are several good alternatives. InkBlot and Artfol have appeared. They have…" (ytr_Ugy05hylT…)
- "Okay, why would the guy get into a driverless car when he has a plane to catch??…" (ytc_Ugw37TcFf…)
- "Because our capitalist society is build on consumerism and property rights. If p…" (rdc_kqtqkvk)
- "I am okay with this. I need AI in my life purely for one reason: medical researc…" (rdc_n5ecbe7)
- "You can put information about yourself (or whatever) in the system prompt via th…" (ytr_Ugxa8R2oa…)
- "Likeley Globslist Will Be Behind Wiping Us out and Using AI as The Skapegoat da…" (ytc_Ugx2W-TS3…)
- "Thank goodness, I no longer have to worry about global warming. AI will solve it…" (ytc_Ugx33P-hG…)
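Drawing the samples is presumably nothing more than a uniform pick over the coded IDs; a sketch, reusing the hypothetical `codings` index from above:

```python
import random

# Spot-check eight coded comments at random (the sample size is arbitrary).
for comment_id in random.sample(sorted(codings), k=8):
    print(comment_id, codings[comment_id])
```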
Comment
I disagree with the characterization that any of what was shown here would qualify as an emotion at all. At best, you could argue that due to humans having emotions and those directly tie into our use of language, it can mirror the concepts of emotions. As such, you can effectively map any set of words (individual or with context) onto a map of emotions. Given that LLM's work by mapping language based upon relations to other words/phrases, this is an entirely expected behavior.
You could do this exact same technique (I mean *exactly* the same) with the concept of "engineer" and modify the activations to make the AI act more or less like an engineer.
One could argue this is a "meta context" as opposed to context. You can emulate the effect via simple context changes. Where this is more powerful is that you can use it to maintain certain traits consistently rather than just until the context gets too large.
That said, this is why it freaks out so much when the activation is just set to "on" for a certain concept. It creates a positive feedback loop where the activation increases every loop. Again, you can emulate this via context manipulation. So, *any* concept that you set to multiply the activation (Or even add to it) will result in a consistent increase of that activation over time.
Source: youtube · Video: AI Moral Status · 2026-04-08T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzEXBVqTjWduT6GEp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzkQRQ68KNA_xqR0_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwi2ssGYQ6dPwFCTQR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3PMLiSG6gGuMb11R4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxxUriqeoKJxU6n7c54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzFkc5a5_Y4oOkdIdN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXi4NvJplDsKClklR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz4TD2_mEyDvPiDJXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw8zDmKi1znIS0wvNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxCSGl-b3b62MRjN5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
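Each raw response covers a batch of comments as one JSON array; the Coding Result table above is a single element of that array rendered as rows. A minimal parse-and-validate sketch follows; the allowed values per dimension are inferred from the examples on this page, not taken from the project's actual codebook:

```python
import json

# Values observed in the examples on this page; the real codebook
# likely defines more categories per dimension.
SCHEMA = {
    "responsibility": {"none", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear"},
    "emotion": {"indifference", "approval", "mixed", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and flag any out-of-schema value."""
    records = json.loads(raw)
    for record in records:
        for dim, allowed in SCHEMA.items():
            if record.get(dim) not in allowed:
                print(f"unexpected {dim}={record.get(dim)!r} in {record['id']}")
    return records
```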