Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@mynamebob7778 it is illegal to not treat somebody at any medical institute if they are sick, while i can't reference the same data (and i would like to see what you are referencing) if i had to guess that statistic is probably based on a time to wait data which would be higher in high density low income areas which is where unfortunately African Americans live because i would find it hard to quantify how much flu you have between the races, further more the access to health care is abismal in the US and the whole system of insurance completely impoverishes people and it's an institution that needs to be abolished. My previous point is why would you train ai on data that you don't agree with politically, which would mean that even with positively biased data it is objectively drawing logical conclusions that are misrepresented as racist. This example would represent like google bard or chat gpt and not ai chat bots like the one Microsoft released on Twitter and got it to say racist things, if all you feed an ai is the color red and then ask it what the color blue is it's gonna show you red and tell you it's blue.
youtube AI Bias 2023-06-20T17:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgwfakyCTJRdqbOXeL14AaABAg.9vsUGjq_YCi9vuN5A9dlZi","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwS4zO6Qgsbdl3bmWd4AaABAg.9vg1Byf0qpn9vhQeyls4bR","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwNwdCAmrL9FBdjRhB4AaABAg.9tDzEX_MD8d9w8mf8r9I8q","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugw_NP9D91NTaNfbQ7t4AaABAg.9sS7pBbS_rm9sWQzchffqo","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugx6FwQkq_DY2FFf9LR4AaABAg.9r-t2tZZ3in9r12JmUFxfS","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgxSQxEpH_DR6svQVzd4AaABAg.9r-oA-rqw_f9rBDljt-DTI","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxSQxEpH_DR6svQVzd4AaABAg.9r-oA-rqw_f9rBaAW4wF59","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxSQxEpH_DR6svQVzd4AaABAg.9r-oA-rqw_f9rBc7XDwHDk","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxxmYpXbn53rI1xE2t4AaABAg.9qwPqXgrJkP9vyTMolNKN6","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyKfAy4JKinKtueURp4AaABAg.9qevhelZdXJ9vyqGotD98n","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
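Each object in the raw response pairs a comment `id` with the four coded dimensions shown in the table above. A minimal sketch of how such a batch response might be parsed and sanity-checked in Python follows; the allowed-value sets are inferred from the codings visible on this page, not from a documented codebook, and `parse_codings` is a hypothetical helper name.

```python
import json

# Allowed values inferred from the codings shown on this page
# (an assumption, not a documented schema; "policy" shows only "unclear" here).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company",
                       "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"unclear"},
    "emotion": {"fear", "indifference", "mixed", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: codings}."""
    out = {}
    for rec in json.loads(raw):
        coding = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"unexpected {dim} value: {value!r}")
        out[rec["id"]] = coding
    return out

# Hypothetical single-record example in the same shape as the raw response.
raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"unclear",'
       '"emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytr_example"]["reasoning"])  # deontological
```

Validating against a fixed value set at parse time surfaces malformed or off-schema model output immediately, rather than letting it propagate into the coded table.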