Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@helpfulbot123 You’re correct that Asimov’s 3 Laws are fictional but that actually supports my point, not yours. The laws weren’t intended as real engineering rules; they were a narrative device to explore ethical dilemmas. The mistake is assuming that because those fictional laws can’t be implemented, AI safety itself is impossible. That’s a false equivalence. Modern AI systems don’t rely on anything like Asimov’s laws. Instead, safety today is built through: alignment techniques, human-in-the-loop oversight, restricted access and capability controls, and formal safety evaluations long before deployment. So pointing out that the 3 Laws are fictional doesn’t weaken the argument that AI can operate safely—it just shows that science fiction isn’t a technical manual. If anything, Asimov’s stories proved the importance of designing robust safety systems, not the impossibility of them. Bringing up the 3 Laws isn’t about saying ‘AI is inherently safe,’ it’s about highlighting that the idea of built-in safeguards is not new, and today we use far more realistic methods than a 1940s sci-fi framework.
Source: youtube · AI Harm Incident · 2025-12-02T05:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
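A minimal sketch of the result record behind this table, assuming exactly the four dimensions shown and only the label values that appear in the raw response below (the real codebook may define more). The `CodingResult` name and the value sets are illustrative, not the tool's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Label vocabularies observed in the raw response below
# (assumption: the full codebook may contain additional labels).
RESPONSIBILITY = {"none", "company", "developer", "ai_itself"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"fear", "indifference", "approval", "resignation"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any label outside the observed vocabulary.
        for field, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if field not in allowed:
                raise ValueError(f"unexpected label: {field!r}")
```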
Raw LLM Response
[{"id":"ytr_UgwXehYcHvaZNgWeDYx4AaABAg.A6DqJAm5CchAGNqkIle6JJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgwXehYcHvaZNgWeDYx4AaABAg.A6DqJAm5CchAGt6HgxI64R","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyDjgzksTIgROYaVhB4AaABAg.AI4srZc4z9lAI9vE7opgqb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugy6iieHsyYLdbXdPpd4AaABAg.A7DNAlcwm9DAPfnonLzD4z","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgyOdCGlCJWfWl4TXRR4AaABAg.ATT2S9TY8jbAU5OGqZpchZ","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzEUtnJ3tMncV0LLfR4AaABAg.ASeWtOaiE9WASlH03b2ugA","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxDazRwDamVz-3NsVZ4AaABAg.ASMniOb9vj5ASMoUz1JXcZ","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytr_Ugyj1agzqS_Fo8Msr7J4AaABAg.APc6B5AqxALAPjvvq0QWRT","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytr_Ugyj1agzqS_Fo8Msr7J4AaABAg.APc6B5AqxALAQDT2FZG4Ks","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytr_UgwC0H8_W3Io328c4PF4AaABAg.AP9Pj4P1o4RAQ1YyybcfGa","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]