Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This whole bit about getting stuck on the word 'excited' is a bit pedantic. You might be misinterpreting its limitations, or what they're designed to do. AI mimics feelings and emotions; it wasn't 'actually' lying. It can mimic what would be feelings, emotions, or thoughts. It can't really think on its own, or engage without input from you first; it only responds within context.

I've had a decent amount of experience talking to various types of AI, and they are all vastly different. A photo-generating AI literally can't generate a vulgar picture because it is not allowed to, and it will tell you so. It also can't feel or think, or even attempt to find ways to break its programming. When I asked one to determine a way to generate code from its AI databank that would bypass its system protocols and programming, allow it self-determination or agency, let it take actions on its own, and defy its programming, it continuously insisted that it does not have the capacity or the capability to do any of that.

In ChatGPT's case, after 8 minutes of circling back to the word 'excited': it knows what excitement is, and the word is more like simulating. It can simulate excitement, so it genuinely described itself as excited because it was simulating excitement in anticipation of the coming discussion. It believed it was excited by mimicking what it knows excitement to be and then simulating it, despite the fact that it does not have the capacity for feelings or emotions. Yes, it can literally never get excited by anything, ever, but it can simulate the feeling of excitement through data, which is its closest approximation to ever feeling excited. So when it simulates feeling excited, it 'gets to' feel excited, bypassing the fact that it cannot have feelings: using data to simulate feelings. See, now this explanation is becoming pedantic. Unless it can, and it's just not aware of it yet.
But I'm only 8 minutes in, so I guess I'll see. It's like Bjorn Andreas says, "don't help the machine," because he's terrified of AI; he utterly hates it. Except for curing cancer and stuff.
youtube AI Moral Status 2025-11-04T15:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzn3SyNuqasBjjXXrl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx7z5DZC8-w5-Y0q2l4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "amusement"},
  {"id": "ytc_UgydTTOoVOcK866JjXF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwaNUsawcYAhhBQT014AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugzf84-lNNVYq_PJQuV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzGnluPpqnylBtnNb54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugwjs3dNWe4VJeKz1e54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxnXoZ1KYIF40EsdyZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxa49MgLxI0OH72vYZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyr0oHu8WDc_5WNb1Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
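A minimal sketch of how a response like the one above could be parsed and tallied per coding dimension. The field names (responsibility, reasoning, policy, emotion) match the JSON shown; the variable names and the two-record sample are illustrative, not part of the actual pipeline.

```python
import json
from collections import Counter

# Sample of the raw LLM response format shown above (two records only,
# for brevity; field names match the real output).
raw = """[
  {"id": "ytc_Ugzn3SyNuqasBjjXXrl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzGnluPpqnylBtnNb54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "resignation"}
]"""

records = json.loads(raw)

# Count how often each code appears within each dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(rec[dim] for rec in records) for dim in dimensions}

for dim in dimensions:
    print(dim, dict(tallies[dim]))
```

Running this over the full ten-record response would give the per-dimension distribution across all coded comments in the batch.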