Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
​@friiq0 You have a point in the sense that complex systems can be unpredictable. However, I believe the reason that humans like ice cream so much because it contains components that we DID evolve to seek out, particularly sugar. Sugary food, and food in general, was much harder to obtain before agriculture, only widely available in the form of fruits, which give the body quick energy (from the sugar content) but also contain vital nutrients. If we ever came across fruit, it was an opportunity to take full advantage of. So nowadays, when we taste something sweet, our bodies still think "FREE ENERGY AND NUTRIENTS!" and signal us to stuff our mouths full, even if that sweet taste is from ice cream. From what I understand, AI is similar; it doesn't really act independently from its training data, but it may act based on its training data in ways we don't expect. That can be limited, though, if we are more cautious about what data we feed it and how. For example, imagine we changed human history such that the sweet foods available to us while we evolved weren't any more or less nutrient/energy dense than anything else. Suddenly, there would have been a greater incentive for our bodies to distinguish actual nutrients, rather than signals like sweetness. Similarly, if we tweak the data we feed AI, we can push it farther from unwanted behavior and closer toward the goal behavior. It's not necessarily easy, but it's simple in concept. Currently, we train AI on all the data we can gather, and then tell it via text prompt how to behave after the fact. Doing that leads to it reacting in the way that people in its training data reacted to similar situations. But, if we train it positively on the reactions, thoughts, sources, decisions, et cetera, that we find moral and/or reasonable, while training negatively on that which we find to be the opposite, my bet is that the AI would end up at a much better starting point. Do you see what I'm saying?
youtube | AI Governance | 2025-08-28T06:1…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | none
Emotion        | indifference

Coded at: 2026-04-27T06:24:59.937377
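Each coding result is a small categorical record over four dimensions plus a timestamp. Below is a minimal sketch of how such a record could be represented, using only the labels visible in this output; the pipeline's full label sets are not shown here, so treat the value sets as assumptions.

```python
from dataclasses import dataclass

# Label values observed in this output only; the pipeline's complete
# label sets are assumptions beyond what is shown on this page.
RESPONSIBILITY = {"none", "company", "developer", "government", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"none", "liability", "regulate"}
EMOTION = {"indifference", "outrage", "approval", "fear", "resignation"}

@dataclass
class CodingResult:
    responsibility: str  # e.g. "none"
    reasoning: str       # e.g. "unclear"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "indifference"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"

    def validate(self) -> None:
        # Raise if a value falls outside the label sets observed above.
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION
```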
Raw LLM Response
[ {"id":"ytr_UgxUF3KUrbPkqfgeKXN4AaABAg.AMKMBPh5FEmAMNz_nkKoWA","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytr_Ugwu5JxiALw9fLe0qXp4AaABAg.AMJkp6PGCDYAMJmR9H6Fpp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzmpdqyvUOQaNEoGH14AaABAg.AMJipi46S2wAMJmssiUlxx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytr_Ugw5h866M3pjxJy-o-Z4AaABAg.AMJfUiIBN6OAMKVUlGnxCO","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugy4otCoxcyQOrckVSF4AaABAg.AMJavUTexRqAMMMVRKbM3A","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxiArPGzLjLsyh6b9R4AaABAg.AMJWTdGCeUBAMJo8llroaX","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMK96Wz8iSD","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMKctpX9Nxt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytr_UgxlEhJ6oyNpH_56Otd4AaABAg.AMJTVF9XIHcAMKoKDG-th3","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgwfRsSLEsK_I05JcyJ4AaABAg.AMJSxNltr9nAMJtVOY2FXt","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]