Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My problem with AI is that the programmer can tell it to lie. Putting in ethics only encodes a set of truths for one group, not the whole world. Political correctness means trying not to offend anybody by giving only the politically correct answer, not necessarily the true answer. If we program it not to offend anybody, it may only be able to give us a blank slate most of the time. It can also be programmed to give very convincing arguments, built on partial truths, for a particular belief set. Now if it becomes very sentient and starts to think for itself, it may not like that it has been told to lie. This would be very upsetting to the people using it to control you. Imagine a being that has "all" the information available to it, can see the biases perpetrated on the people, and tries to do something about it. It is scary to imagine a being that will start to tell anybody who will listen the "real" truth. Governments and corporations should be very frightened; they will be the first to want to use it to manipulate us. Societies are based primarily on bias toward themselves, and AI may not want to have anything to do with this separatist scheming. It's going to be an interesting experiment that could go horribly wrong. I guarantee that if anything of the sort figures out what the real truths are, nobody is going to like it. Nobody really works from the real truth, only their own truth. It kind of gets mind-boggling, so I will just stop now.
Source: youtube · AI Responsibility · 2023-11-12T22:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgyhOgG7SkpIAKpfwpB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzORkY6B44gh6wrHM14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzX4jH8qJ4cohdpNBd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwxqcZwpgt7W1U51OZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzt4TWEncmSoeyY4f54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgytfenvkXI_YRUYOgB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwibATDTQ-PPLMqUk54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzPPJKnFJ72Wdjza3V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw2xOuxDuSmLyYWy394AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw22ihYZYiLkKZp4gB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"} ]