Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
​@MiniMimots I apologize in advance, you are about to be word-vomited on... the way AI/ LLM( larger language models) work is by having a bunch of people giving it text, sentences, storys and anything you give it access to, so Chat gpt "learns" buy obtaining a very large amount of data. It then uses that data to know how to talk( respond) there is also another layer to this, gpt has access to all the info on the internet and all of the prompts and feedback from all the users, it then uses all of that along with a very vary complex algorithm to "teach" itself about things, and how to do things. When you tell gpt to make an image of a car, it first analyzes the promt then uses the data it collected to figure out what you want, what you mean, what you are indirectly referring to, and even correct spelling mistakes, plus it compares all of that to the previous prompts. It then takes all of this context, and then searches all of its data for "car" ( and everything else that was said in the promt.) Then it mixes all of it together to make something that maches your description best. Ontop of that when there is mising details gpt will fill them in with artificial creativity. The more you use gpt the better it understands you, your general thing you like, need, do, and even your preferences. But there is a setting that you ca tern off that doesn't let gpt use your prompts for training the LLM. the downside to this though is that the memory( personalization, that makes ChatGPT more efficient for you when you use it.) Is ternd off so it feels dumb because if you ask it about soming it mentioned in a priore promt/response, it will have bo idea what you mean. Hope this helps dont be afraid to ask.
youtube AI Moral Status 2025-06-06T18:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgxnO8-3byx-aS1ED6Z4AaABAg.AHZ0HFx60K7AIzoSNGZLUP", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgyZV6al_G10kst5V8h4AaABAg.AHLmEFQY1JKAHPbTEC8LHu", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugy3SAEwpLeGEDYawQx4AaABAg.AHKgg61PmYAAHPbojUJ3Vd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwgNRx7VxF_Bu0XKnJ4AaABAg.AHGeC5F9XPrAJ02grXjeld", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwgNRx7VxF_Bu0XKnJ4AaABAg.AHGeC5F9XPrAJAHk3buggl", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwgNRx7VxF_Bu0XKnJ4AaABAg.AHGeC5F9XPrAJAbpXOSjpv", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwjESSK2E21p29ONsh4AaABAg.AG8BeyK7cYkAJ4RozN3B6U", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugw7xSXfznWrhWCRi5h4AaABAg.AFmEkyhgPHEAJ1uejZIJB-", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwGAsiyNuQuu7IXsnB4AaABAg.ACKO1I76EgfACU8XUuzmMI", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwNLECeJJHtsDe6vsZ4AaABAg.ACJrJCxBlUdACU8igkTEfA", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
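Since the raw response is a JSON array of per-comment records, the coded dimensions for a given comment can be recovered by parsing the array and indexing on `id`. A minimal sketch (this is illustrative, not the tool's actual parsing code; `index_codes` and `DIMENSIONS` are hypothetical names, and the two records are excerpted verbatim from the response above):

```python
import json

# Two entries excerpted verbatim from the raw LLM response shown above.
raw_response = '''
[
  {"id": "ytr_UgyZV6al_G10kst5V8h4AaABAg.AHLmEFQY1JKAHPbTEC8LHu",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxnO8-3byx-aS1ED6Z4AaABAg.AHZ0HFx60K7AIzoSNGZLUP",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
'''

# The four coding dimensions reported in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Map each comment id to its coded dimension values."""
    records = json.loads(raw)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

codes = index_codes(raw_response)
# Look up the coding for the comment displayed in this section.
print(codes["ytr_UgyZV6al_G10kst5V8h4AaABAg.AHLmEFQY1JKAHPbTEC8LHu"]["emotion"])
# prints "indifference", matching the Coding Result table
```

Because the model returns codes in one batch, the lookup keyed on `id` is what ties each record back to its source comment when inspecting a single result like this one.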