Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
well, if you were to randomly pick 2500 scientists at random.... Would they predominantly BE white men wearing glasses? I dont have that kind of data on hand, but I feel safe in saying .. yes... by a good number... second probably being black men and third probably being white women, then asian men and asian women and then black women and then latino men and then latino women.. simply by numbers.. and those numbers would get closer as you go down the list but i dont think they would change in any significant way where the rank would shift.. every industry has those... CEO is probably white men at the top as well, though asian men would prob place higher on this one than white women and possibly black men.. Me saying this isn't racism... None of my thoughts have ANY intention of negative associations with any of those groups.. I'm simply stating numbers... I bet if you put in Scientist, Black, Male, wears glasses... or blond, woman scientist, hair pulled severly back... either of those would get you more accurate results... It says you should be overly descriptive with AI prompts , that it will be much more successful in producing a satisfying image... You just said 'scientist'. As the predominant number would be white males... it SHOULD do that... now, if yu said a black man, scientist or a woman scientist... and it STILL spit out white dude images? THAT would be an issue.. try Professional basketball player... Did it spit out an equal amount of asian, latino and Greek examples? SHOULD it? how far specific does it need to be in order to satisfy every possible eventuality? does it need to? is asian, black, white and latino enough? or do we need further localization... Puerto Ricans, Dominicans, Cubans... or chinese, koreans, laoatians, veitnamese, japanese? Are you SERIOUSLY vouching for equal representation across all backgrounds??? Be more specific with your prompts and it wouldn't do that.. As for face recognition... Increase the brightness on the CAMERA and see if it improves... IT JUST MIGHT be that the program has ACTUAL problems due to Being able to differentiate any sort of differentiation between pixels that are in shadow and the color of her skin... Because her skin is really dark... It MIGHT BE due to a capability issue in the hardware.. and NOT be RASCIST at all... Lets use common sense here.. You dont go up to the makeup counter and ask for make up (you can tell that I dont know how this type of transaction works) yet when the lady at the counter returns (or if AI creates the scene, it should NOT always put a lady there... ? men should be equably represented there... it would look innaccurate, EVERY time) they hand you make-up your skin color.. You dont accuse her of being discrimitive... discrimintory.. discrim.. rascist, do you? There, if you demand that they give you make up that is made for a super black nubian queen... and you put THAT on.... guess who is the rascist now? I say ENOUGH with this need to see a version of yourself as everyone... YOU are SUPPOSED to see an asian man, a latina woman a white lady a black man... whichever... and still SEE a version of YOURSELF represented... DO YOU UNDERSTAND THIS? Demanding that each type is uniquely represented in all positions is DEI nonsense and a total waste of time, energy and resources to appease the immature feelings of an adult human... We dont NEED all of our characters to look like us.. we dont NEED to be included in examples. If youre using AI image generator, be specific... you wont have problems... 
and if youre upset that it shows white men for scientists... then encourage more women to be scientists.... THATS where you can focus this waste of energy.. to self satisfy an individuals need to be seen.. it is immature... learn from the disabled... They dont demand to be represented in fields where they just dont shpw up in number for. You may think you are doing good but the more emphasis on different race and skin only perpetuates racism..
youtube · AI Responsibility · 2025-06-26T12:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
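
One way to carry these per-comment codings downstream is to mirror the Dimension/Value layout above in a small typed record. A minimal Python sketch; the field names come from the table, but the example values in the comments are only those observed in this batch, not a full codebook:

from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedComment:
    # Field names mirror the Dimension/Value table above.
    id: str                  # YouTube comment id, e.g. "ytc_UgwZnTgqWIaXgkm-elR4AaABAg"
    responsibility: str      # e.g. "none", "government", "company", "developer", "user", "ai_itself", "distributed"
    reasoning: str           # e.g. "unclear", "deontological", "consequentialist", "virtue"
    policy: str              # e.g. "none", "regulate", "ban", "liability", "industry_self", "unclear"
    emotion: str             # e.g. "indifference", "outrage", "fear", "approval", "resignation"
    coded_at: Optional[str] = None  # ISO timestamp, e.g. "2026-04-27T06:24:53.388235"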
Raw LLM Response
[ {"id":"ytc_UgysmL8xBkZa_T7vIFV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzvx0WOIJPrXWcuzdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxCBV2sUFyG_1vZGph4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwUB0Sn7xSz7mopD7h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxycrdFvf1DAP-zzy54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"indifference"}, {"id":"ytc_UgwmAKGH_69A3IYJnDN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyPP6pZUC-rkfRtFoN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UgwX-EOHcIOPYSQpAkV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxfkEKJKFm3h5667ZR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwZnTgqWIaXgkm-elR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]