Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The sooner we get rid of the AI technologies, the better. This corporate data st…" (ytr_UgwheK2GK…)
- "I bought her book on Biblio as soon as I watched this and it just came. Excited!…" (ytc_Ugw1G5VPX…)
- "We don't need anymore of these, or more AI, the people are already served, somet…" (ytc_UgzdCvMS5…)
- "I wonder what electronic drug we can make and have an AI junkie. That wouldn't b…" (ytr_UgwBdyh7Z…)
- "There are AI videos already. Typically it’s a well known person like Warren Buf…" (ytc_UgzyJqKlw…)
- "AI is not that important and it's a huge tech bubble from silicon valley. The re…" (ytc_UgxdM1G0a…)
- "One way to look at the comparison to a nuclear bomb is that only two have been u…" (ytc_Ugzkiosqp…)
- "This Economist correspondent's analysis is very flawed for several reasons: The …" (ytc_UgxHbeT7X…)
Comment
Is there any peer reviewed objective research statics to prove, AI systems inherently biased compared to human counterparts? Will the AI in a black African country discriminate against the darker skin person to a lighter skin person or is it happen in the other way round? Should people balme AI as bias when their Politically Correct views negated by it?
Source: youtube · Topic: AI Governance · Posted: 2019-04-07T12:1… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyYC3wgPMD0EAL9gwB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylmH1__KLOiw_aqdV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxfnCjUA0bsFH-4HQZ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwjB1NMV2kI4Jp97yN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFQBMaEjpcbXIexjF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwMxyT7W0HNG_asElJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzwG_EMfyZsvk65KM94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyK9fQGBdanoPEit3V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyV9ETgwnvwkRHS2d94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyv7Za_BJGKlAguVMd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
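A raw response in this shape has to be parsed and validated before the per-comment codes can be stored. The sketch below shows one minimal way to do that in Python; the allowed values per dimension are inferred only from the sample response shown here, so the real codebook may contain additional categories, and the function name is a hypothetical, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above; the full codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "government", "developer"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "mixed", "outrage", "fear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting malformed rows so bad model output is caught early."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
print(parse_coding_response(raw)["ytc_example"]["emotion"])  # mixed
```

Validating against a closed value set like this is what makes a "Coded at" timestamp trustworthy: a row only reaches the results table if every dimension carries a known category.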