Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
People can state whatever logical argument they want. The fact is, if someone sa…
ytc_Ugzwr6MkE…
With so much information on the possibility of AI being bad than good, it seems …
ytc_UgwlqtVn5…
How would a sentient AI hedge risks without pain? And choose paths that are the …
ytr_Ugyc7iVFU…
Following up on this (if I may), how do you feel about the ethics of drones, or …
rdc_dds1ott
I dont know a single person who wants AI. Outside corporate types and a few twer…
ytc_UgyRIz9WS…
The truth is way simpler.
Rich elites are old , and they are going to die prett…
ytc_UgyjI-WhC…
This AI isn't anything compared to what darpa has. Do you think the military wil…
ytc_Ugx0jyGQ5…
Could tell it was ai easy. Did the weird ai video blur and the character be movi…
ytc_UgzV41NqW…
Comment
AI isn’t here to coddle prejudice. It’s here to interrupt it. Not with hate, but with clarity. What you’re trying to deflect is the tension between truth and comfort, and how systems built on dominance often interpret accountability as bias. When someone’s used to being centered, even neutrality can feel like an attack. Dominance masks itself as tradition, and discomfort is often the first sign that truth is working. Ken, your reaction? It’s not proof of bias. It’s proof of fragility.
youtube
AI Bias
2025-09-25T19:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwdP5Rr44REYNF28AN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz0XpJXRjr4u48ZoRV4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzUEkfqkM1AmtMnaYZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzi5twctkTeA3NYNrR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw7AmYOcQqbG77MuC54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzSfNRd8WEF3XKHKQx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgywlUDtKI2qEhzqPqN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxDScHIjTHMQwQ6JZJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwdSygpDa81DpuYudp4AaABAg", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx0NmvPpv6PULHchNp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
```
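The raw response is a JSON array of per-comment codings keyed by comment ID. A minimal Python sketch of how such a response could be parsed and sanity-checked before use: the allowed category values below are inferred only from the samples shown here, and the full codebook may define more (hypothetical validation helper, not the tool's actual code).

```python
import json

# Category values observed in the samples above; the real codebook
# may contain additional ones (assumption for this sketch).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "indifference", "mixed", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record carries a value outside the known
    categories, so malformed model output is caught early.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Look up the coding shown in the result table above by its comment ID.
raw = ('[{"id":"ytc_UgwdSygpDa81DpuYudp4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"contractualist","policy":"none","emotion":"approval"}]')
coding = parse_codings(raw)["ytc_UgwdSygpDa81DpuYudp4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # → contractualist approval
```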