Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI itself is not the problem. We are. AI is neither “good” nor “bad.” It’s a tool – just like a knife. With a knife you can cut bread… or harm someone. The difference lies in who holds it, and with what intention.

🔹 AI itself = neutral
Algorithms, data, probabilities. No values, no morals, no agenda.

🔹 Humans = the real drivers
– Bias: If the data is biased, the AI will replicate those biases.
– Goals: If we train AI to maximize profit, it may make ethically questionable decisions.
– Rules: Without ethical standards and regulation, chaos follows.

The real issue isn’t artificial intelligence – it’s human intelligence behind it. AI is a mirror. It reflects what we feed it: our values, our flaws, our vision. The responsibility lies with us – developers, companies, policymakers, and society as a whole.

So the real question is: Will AI amplify our wisdom, or our weaknesses?
youtube AI Governance 2025-10-03T22:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwONjXUEi0T3kBq_qF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwoumtwLCvjk4LqNI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwMVtVad2rk1lajCot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgznlbAHr8zMFwIfkUd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5z67-W2ptRQEeZOB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxeghA6eMmCgQObyv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8QrI0Lamr_0vvudt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugw8j8H2g0sTJ7gNhQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAJvcd41YfQfmK46l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
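The raw response above is a JSON array with one coding record per comment id. A minimal sketch of how such a response could be parsed and validated is shown below; the allowed value sets are assumptions inferred from the values visible in this sample, not a confirmed codebook, and `parse_codings` is a hypothetical helper name.

```python
import json

# Assumed value vocabulary per coding dimension, inferred from the
# sample output above (not an official codebook).
ALLOWED = {
    "responsibility": {"user", "ai_itself", "company", "government",
                       "developer", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "unclear", "industry_self", "regulate",
               "liability", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding records) and
    index the codings by comment id, dropping any record that uses a
    value outside the assumed vocabulary."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with the first record from the response above:
raw = ('[{"id":"ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"indifference"}]')
coded = parse_codings(raw)
print(coded["ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg"]["reasoning"])  # virtue
```

Validating against a fixed vocabulary before indexing is a common guard in LLM-coding pipelines, since models occasionally emit labels outside the schema.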