Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This really sounds like a non-white, non-man problem. Perhaps we need a completely different type of people making and training these AI's? They are going to better understand the potential likely bias datasets they are training the AI with.
youtube AI Bias 2023-01-01T01:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx_sazOD-mcHGdMH3p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzKbvrnMBJZwdmL0sV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_UgxYLiBsGGqcrDJcaEB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzRNcuD5kXHe2ITyx14AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy_QmAHH2J93017X4N4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxpPGrPW6Xgp9MeBH94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw7Xxes6Ya99rFueAN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw5_ZK6HX9wcdBg5gt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxbDuarwFpXKU0qDot4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzii1tQOxTPz0ZP8h14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
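The raw response above is a JSON array of per-comment code records. A minimal sketch for parsing and validating such a response, assuming the field names shown in the output (the dimension list is inferred from the records above; the actual codebook may allow additional values):

```python
import json
from collections import Counter

# Coding dimensions as they appear in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into per-comment code records,
    dropping any record missing an id or a coding dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if "id" in rec and all(dim in rec for dim in DIMENSIONS)
    ]

# Two records copied from the raw response above, for illustration.
raw = '''[
  {"id": "ytc_UgxpPGrPW6Xgp9MeBH94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw7Xxes6Ya99rFueAN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]'''

codes = parse_codes(raw)
emotions = Counter(rec["emotion"] for rec in codes)
print(len(codes), dict(emotions))  # → 2 {'outrage': 2}
```

Dropping malformed records rather than raising keeps a batch usable when the model occasionally omits a field; a stricter pipeline could log or re-prompt for the dropped ids instead.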