Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Professor Gary hit the nail on the head. I asked ChatGPT about transphobia and gender dysphoria, and whether there are two genders or two sexes, and the results were quite astonishing. For example, when I asked how many sexes mammals have, it said two. I then asked how many genders mammals have, and it said two, except for humans: humans have infinite genders. I tried to get it to expand on that, and it kept giving me the same answer, basically that humans have multiple genders because they can reason and understand complex issues and dogs can't. I then challenged the chat bot: identity is a thought, a mental representation of one's status, and not a physical reality. Once again I got the same canned answer. Finally, out of frustration in my attempt to get an unbiased or objective answer, I said thank you and goodbye. To my surprise, it answered that "we" thank you as well. That caused me to ask it: why did you say "we" instead of "I"? In every other chat session the bot always spoke in the singular. In response, GPT-4 answered that it was a result of the input and programming of its creators, and that "we" was the appropriate response. So now I understand that whoever programmed anything about gender obviously did it from their point of view and not from an objective point of view. Objectively, if the body is not damaged and someone thinks they are a different gender than their body is, that is a state of mind, as even the chat bot had said earlier, a mental concern or perspective. Yet the chat bot refused to accept this, and said it was not medical or mental but was in fact a reality. I believe that this assumption was based on the input it was given, as it admitted to me. It is very critical that we make sure these AI systems are programmed objectively for every scenario, not just from the perspective the programmer believes to be true.
A political aspect of this: if a right-leaning organization built an LLM based on its perspective, the chat bot would typically come up with answers that support the right-leaning perspective, and consequently a left-leaning LLM builder would have a chat bot that gives answers from a more left-leaning perspective. We need to avoid that bias in the programming; the LLM or AI system needs to be completely neutral, using only logic, not opinion. Take this thought process to elections: depending on the chat bot you select, it may recommend one politician over another and try to give convincing reasons for selecting that particular politician in an election. To be blunt, a Trump-leaning chat bot would convince you to vote for Trump, and a Biden-leaning chat bot would try to convince you to vote for Biden. We need a chat bot or large language model that does neither, but gives you the statistics and facts on each candidate without an opinion. Taking that a step further, it should give you facts on all the candidates, regardless of which one you are asking about, so that you get a balanced answer. For those who may doubt me, just look at Twitter and Facebook to see how a human being's beliefs can affect the corporation's algorithm. Those algorithms are polarizing people and tend to steer them toward like-minded people, which is one of the reasons we seem to have more divisiveness now than we did ten years ago. Social media is presenting a biased perspective, which tends to lead the observer into believing that that perspective is the truth.
youtube AI Governance 2023-05-20T19:2… ♥ 6
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzZm19vwQkQl6KV7xd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyxKdWzkSOpeCVgKj54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyAqod00ZRxj9dEJoF4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxPEUo1WRd7GVEJD_t4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxnkg9Pu9pk0dBSzGB4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwRcfh48Mk3g7ovLcJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyGETzKrBCwBgtdicd4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxOXA6s98zdf6zLu1t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxf_TXzr9t3rRcfSnh4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxWjX1Oo-kwDB2eHup4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]
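The coding-result table above is simply one record pulled out of this JSON array by comment id. A minimal sketch of that lookup, assuming the raw response is valid JSON (the helper `code_for` is hypothetical, not part of the coding tool; only two of the ten records are reproduced here):

```python
import json

# Two records copied from the raw LLM response above; the full export has ten.
raw = '''[
  {"id": "ytc_UgzZm19vwQkQl6KV7xd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxPEUo1WRd7GVEJD_t4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]'''

def code_for(raw_json, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((r for r in json.loads(raw_json) if r["id"] == comment_id), None)

rec = code_for(raw, "ytc_UgxPEUo1WRd7GVEJD_t4AaABAg")
print(rec["emotion"])  # mixed
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is worth catching in a real pipeline.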