Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI is causing more harm. Today 98% of people around the world don't know what's …" — ytc_UgzWbUPJF…
- "OpenAI been updating their version of chat gpt over time so whatever bot you tal…" — ytc_UgxzJz8iS…
- "You cannot regulate AI. AI regulate human being one way or another. And it only …" — ytc_Ugw9YAgV-…
- "i can see how fucked up it is that stuff like that can be created without there …" — ytc_UgxvAxlMG…
- "I see their robotics division has moved from the silicon valley to the uncanny v…" — ytc_UghOeUpr1…
- "Every robot or software job replacement should be issued a tax number and pay eq…" — ytc_UgysyQbgc…
- "These ai are generally made by large teams of programmers working for a large co…" — ytr_Ugy19Slu4…
- "I knew I wasn't emotionally ready for self-driving cars the first time I had to …" — ytc_UgwsKVIEf…
Comment

> the AI doesnt like darker skin people because it is "pulling from biased information"
> No, This AI is just using pattern recognition, just like any good AI should do.
> Pattern recognition will save any white person when it comes to associating with darker skin things.

Platform: youtube · Video: AI Bias · Posted: 2022-12-21T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxIrrcYtIP6gttcmgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzt2UmOvyYFp94ikvt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx71wZzihd-4u00z4Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxf_egCfISdg0mzHhd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYK7QjYdWF5fNZJWR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXHnhgRapbiWyH7I14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxTSl12JsTvFmJqi5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFczekwDYgCHeSsNV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTXvSeFsesO7xzOtB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw984TZc-tk45cbeUZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
```
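The lookup-by-ID view above can be backed by parsing a raw response like this one into a dictionary keyed on comment ID. A minimal sketch, assuming the raw response is exactly the JSON array the model returned (`index_codings` is a hypothetical helper, not part of the tool's actual code; the array is shortened to two real entries from the response above):

```python
import json

# Two entries copied from the raw LLM response above (the full array has ten).
raw_response = """
[
  {"id":"ytc_UgxIrrcYtIP6gttcmgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw984TZc-tk45cbeUZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)

# Look up one comment's coded dimensions by ID.
coding = codings["ytc_Ugw984TZc-tk45cbeUZ4AaABAg"]
print(coding["responsibility"], coding["policy"])  # → developer liability
```

In practice the model's output may not be clean JSON (stray prose, code fences), so a real pipeline would wrap `json.loads` in error handling before indexing.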