Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "A needlessly incendiary title. Like some experts say IN THE ARTICLE it is imagin…" (rdc_grrgf5v)
- "Writing code has never been the actual goal of software engineers. Shipping rea…" (rdc_lqse07q)
- "If I have an idea and I’m unsure exactly how I wanna go about painting or drawin…" (ytc_UgxBTegOQ…)
- "Free market? Doing that is what opened the door for Chinese spyware! Nahh, thi…" (rdc_gt6rbe6)
- "I have absolutely EVERY sympathy for Ms Isaacs and all the girls in this documen…" (ytr_Ugz9vY6c4…)
- "Thank you for sharing your perspective! It's interesting to consider different p…" (ytr_Ugw40d_eH…)
- "Hes old and shouldnt dive into the theoretical that much...sure computers can st…" (ytc_Ugxuq9qIv…)
- "The IMMEDIATE and very real danger that comes with AI is... DATA CENTERS. They …" (ytc_UgznNR62z…)
Comment
Requesting "guardrails and techniques" misunderstands governance as technical problem rather than political one. Notice framing: governance presented as neutral risk management implementing "rules, standards, processes"—when actual function is legitimating existing power relations. Your question assumes good-faith actors seeking ethical AI when corporate governance structures exist to shield liability while maintaining profit extraction. "Guardrails at various points" (training, prompt, response, RAG) treats symptoms not cause: problem isn't insufficient technical controls but that AI development serves capital accumulation not social benefit. Real governance questions: who decides what constitutes "risk"? Whose safety matters? IBM selling governance frameworks profits from manufacturing compliance theater—technical solutions to fundamentally political problems of power distribution. Notice video never asks: governance accountable to whom? "Responsible AI" discourse functions as regulatory capture—corporations defining own oversight standards then claiming self-regulation. Actual accountability would require: mandatory algorithmic impact assessments with affected community input, public ownership of training data, worker control over deployment decisions, strict liability for algorithmic harms. But governance frameworks avoid these because they threaten profit models. The "guardrails" you seek reproduce the problem: technocratic solutions maintaining illusion of control while systematically externalizing harms onto vulnerable populations. This isn't cynicism but recognition that governance without enforcement mechanisms, redistribution of decision-making power, and consequences for violations is pure performance—legitimation theater allowing continued extraction while appearing concerned about ethics.
Source: youtube · AI Responsibility · 2025-11-17T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
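
For reference, here is a minimal sketch of the coded record as a Python TypedDict. The value sets are inferred from the sample raw response below, not from a full codebook, and the type names are hypothetical.

```python
from typing import Literal, TypedDict

# Value sets inferred from the sample batch below; the real codebook
# may define additional labels for each dimension.
Responsibility = Literal["company", "user", "government", "ai_itself", "distributed", "none"]
Reasoning = Literal["deontological", "consequentialist", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "ban", "industry_self", "none"]
Emotion = Literal["outrage", "fear", "approval", "indifference"]

class CodedComment(TypedDict):
    """One coded record, as emitted by the model for each comment ID."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```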
Raw LLM Response
[
{"id":"ytr_UgzHhOGWVuZDJTR1xGx4AaABAg.A3inr1EL6-dAPcUwru8V9I","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxOeo7IdqzMmcNsLXh4AaABAg.AH1JjDZMEVrAHfia8ElRBb","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgzkwtCzYk5EOrWmK7B4AaABAg.AFxvxsCIp9cAGnlBOU5gnz","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgwLE5wnAgZsLdG4gjB4AaABAg.AF_xMAqjFuCAFfoJZKAM_R","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxhMad2tSftNhnRKBh4AaABAg.AFQbi1Es9arAFgvWH5TrKL","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzFGcA_0H73twQGdRd4AaABAg.AFAwF6g-I4nAFKLd32KViP","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugx2yEr_MfSMknIWe5N4AaABAg.AFAWhyyz8pzAFQZPz6hqs8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzq2FuJOUrXL4xaOCx4AaABAg.AF7zLbqfYsAAFPiqkNXrUR","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugzq2FuJOUrXL4xaOCx4AaABAg.AF7zLbqfYsAAFQ44FfjBhj","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytr_Ugzq2FuJOUrXL4xaOCx4AaABAg.AF7zLbqfYsAAFQ6JISY7nW","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
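
To support the "Look up by comment ID" workflow above, a minimal sketch of parsing a raw batch response and indexing it by comment ID, assuming the raw model output is a plain JSON array like the one shown here; the file name and helper function are hypothetical.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coded records)
    and index the records by comment ID for direct lookup."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

# Usage: look up one coded comment by its ID.
# "raw_response.json" is a hypothetical file holding the array above.
with open("raw_response.json", encoding="utf-8") as f:
    coded = index_raw_response(f.read())

print(coded["ytr_UgzkwtCzYk5EOrWmK7B4AaABAg.AFxvxsCIp9cAGnlBOU5gnz"])
# -> {"id": "...", "responsibility": "user", "reasoning": "virtue",
#     "policy": "none", "emotion": "approval"}
```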