Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples (click to inspect)

- Midwifery (or providers delivering babies, especially non-surgical deliveries). … [ytc_Ugz2BHvQ9…]
- to be fair , the ai version has the best looking and most dynamic pose of them a… [ytc_UgzX-9HT1…]
- Poor folks, they can't grasp we are in the middle of a paradigm change. No matte… [ytc_UgwTdolEL…]
- I can't wait for ai to take over and then we all realize that it never was usefu… [ytc_Ugxx9Lfsa…]
- I'm pretty sure I read an article where a single doctor had "reviewed" a kabilli… [rdc_jtetnpj]
- Actually, apparently, there is an interview with a major player in AI where they… [ytc_UgwLXPneX…]
- To be recorded and copy delivered by those for who the following is a concern. … [ytc_UgwnNKfnt…]
- Why does a robot have to be white and blonde?😮 Also, robots don't need fake brea… [ytc_UgyqBUk7M…]
Comment
It can go both ways really. Well constructed AI can be an amazing tool to reduce individual bias or it can pull on inherently biased data, treat it as fact and then through the "AI said" screen make it look like objective fact to uneducated users.
Healthcare is a great example for this. If you just look at data without in any way considering how that data came to be and what might be influencing it you will get a skewed result. But at the same time AI can be super helpful in patient assessments etc to give evaluations that were truly objective and guide decision making for doctors. It can revolutionise research or destroy research. It can do many things, just depends on the people who make it do things in the first place.
youtube · AI Bias · 2022-12-20T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
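The four dimensions in the table above can be sanity-checked programmatically before a coding is stored. A minimal sketch, assuming the category sets are exactly the values observed on this page (the real codebook may define more values; `validate` is a hypothetical helper, not part of the tool):

```python
# Category values observed in this page's codings. Treat these sets as
# illustrative assumptions, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"mixed", "fear", "indifference", "resignation", "outrage", "approval"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding looks well-formed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        if dim not in coding:
            problems.append(f"missing dimension: {dim}")
        elif coding[dim] not in allowed:
            problems.append(f"unexpected value for {dim}: {coding[dim]!r}")
    return problems

# The coding shown in the table above passes the check.
print(validate({"responsibility": "distributed", "reasoning": "mixed",
                "policy": "regulate", "emotion": "mixed"}))  # []
```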
Raw LLM Response
[
{"id":"ytc_UgyqxYFQU0KSlh1Bg594AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5nl3gaSHWzY6mZvd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwZj5R_tcb2_86OmiF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxyEyr3Bs6c94YPo7Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw1PSqCuddAPNj8gIR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxQ15splaa0oGbWULt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgynXbJ2OOofq0k_p8Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzh0LUv_RRq9clGjn94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7QKUH3jHk4haeqSZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxfQ0QwaSGRpQcUZh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
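The "look up by comment ID" view above presumably indexes raw responses like this one. A minimal sketch of that lookup, assuming responses are stored as JSON arrays in the shape shown above (the two entries are copied verbatim from the response; `index_by_id` is a hypothetical helper, not the tool's actual code):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgxyEyr3Bs6c94YPo7Z4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxfQ0QwaSGRpQcUZh14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index the codings by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_UgxyEyr3Bs6c94YPo7Z4AaABAg"]["policy"])  # regulate
```

Keying on the full comment ID means the truncated IDs shown in the sample list would need to be expanded before lookup.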