Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Requesting "guardrails and techniques" misunderstands governance as technical problem rather than political one. Notice framing: governance presented as neutral risk management implementing "rules, standards, processes"—when actual function is legitimating existing power relations. Your question assumes good-faith actors seeking ethical AI when corporate governance structures exist to shield liability while maintaining profit extraction. "Guardrails at various points" (training, prompt, response, RAG) treats symptoms not cause: problem isn't insufficient technical controls but that AI development serves capital accumulation not social benefit. Real governance questions: who decides what constitutes "risk"? Whose safety matters? IBM selling governance frameworks profits from manufacturing compliance theater—technical solutions to fundamentally political problems of power distribution. Notice video never asks: governance accountable to whom? "Responsible AI" discourse functions as regulatory capture—corporations defining own oversight standards then claiming self-regulation. Actual accountability would require: mandatory algorithmic impact assessments with affected community input, public ownership of training data, worker control over deployment decisions, strict liability for algorithmic harms. But governance frameworks avoid these because they threaten profit models. The "guardrails" you seek reproduce the problem: technocratic solutions maintaining illusion of control while systematically externalizing harms onto vulnerable populations. This isn't cynicism but recognition that governance without enforcement mechanisms, redistribution of decision-making power, and consequences for violations is pure performance—legitimation theater allowing continued extraction while appearing concerned about ethics.
youtube AI Responsibility 2025-11-17T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzHhOGWVuZDJTR1xGx4AaABAg.A3inr1EL6-dAPcUwru8V9I","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxOeo7IdqzMmcNsLXh4AaABAg.AH1JjDZMEVrAHfia8ElRBb","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzkwtCzYk5EOrWmK7B4AaABAg.AFxvxsCIp9cAGnlBOU5gnz","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwLE5wnAgZsLdG4gjB4AaABAg.AF_xMAqjFuCAFfoJZKAM_R","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxhMad2tSftNhnRKBh4AaABAg.AFQbi1Es9arAFgvWH5TrKL","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgzFGcA_0H73twQGdRd4AaABAg.AFAwF6g-I4nAFKLd32KViP","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugx2yEr_MfSMknIWe5N4AaABAg.AFAWhyyz8pzAFQZPz6hqs8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugzq2FuJOUrXL4xaOCx4AaABAg.AF7zLbqfYsAAFPiqkNXrUR","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugzq2FuJOUrXL4xaOCx4AaABAg.AF7zLbqfYsAAFQ44FfjBhj","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzq2FuJOUrXL4xaOCx4AaABAg.AF7zLbqfYsAAFQ6JISY7nW","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
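A raw response like the one above still has to be parsed and validated before the codes are stored. The sketch below is one minimal way to do that in Python; `CODEBOOK` is a hypothetical validation table whose allowed values are only inferred from the records shown here, and `parse_coding_response` is an illustrative helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# (Hypothetical codebook -- the real coding scheme may contain more values.)
CODEBOOK = {
    "responsibility": {"company", "user", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it has an "id" and every coding dimension
    holds a value permitted by CODEBOOK; anything else is dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        has_id = isinstance(rec.get("id"), str)
        values_ok = all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())
        if has_id and values_ok:
            valid.append(rec)
    return valid

# Usage with a shortened, made-up record id:
raw = '[{"id":"ytr_example","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
print(parse_coding_response(raw))
```

Dropping malformed records (rather than raising) keeps one bad LLM output from discarding a whole batch; logging the rejects would be the natural next step.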