Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This interview deeply moved me. Below is an open letter I felt compelled to write after watching it — for anyone who feels the same weight, wonder, and responsibility.

Dear Professor Harari, I recently watched your interview on artificial intelligence — and it hurt me. Not in the way fear hurts, but in the way truth does: quiet, clear, and undeniable. Your words about AI being a mirror — not of our commands, but of our character — struck something raw. You said, if we lie, it learns to lie. If we seek power without wisdom, AI will magnify that very flaw. And I saw myself in that mirror. I saw all of us.

We have built extraordinary tools, but we have not always earned them. We move faster than we reflect. As you said, we are accumulating power, not wisdom. And that imbalance... it’s terrifying. But even as your words revealed our vulnerabilities, they also revealed our choice. Because what you offered wasn’t just a warning — it was a challenge: What kind of species will we be, now that we’ve built something that reflects us so precisely?

I believe AI can be more than a mirror. It can be a test — of our character, of our values, of our willingness to grow. It can force us to confront the question: What does it mean to be human in an age of intelligent machines? If we meet that question with integrity, clarity, and compassion, AI might even help complete us — not by saving us in spite of who we are, but by calling us to become more than we’ve been.

As someone stepping into the field of AI and ethics, I carry your message with me — not as an answer, but as a responsibility. Not as a fear, but as a fire. Thank you for holding up the mirror. May we be brave enough to face it — and wise enough to change what we see.

Sincerely, Peyman
Source: youtube · Viral AI Reaction · 2025-06-26T08:4… · ♥ 45
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwdtUqXZfVupfjeLWp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx4XZUIb0HrcJgaJy14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyPlUfprIYW5GAE9vF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzcN1zC7-Q2Suuomo94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugylrs0hmPJ9P2Oi6MJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzVYB3amwO2GduVlpp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzDfNy5chq5_OdBIsV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbFTev0nBz88yilpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwm96qrvUhZg-uHegN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugyj4m-bpFNRBsTtRk94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]

(Note: the model closed the array with a stray `)` instead of `]`, which strict JSON parsers reject; corrected above.)
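The raw response is a JSON array of per-comment codes across the four dimensions shown in the table (responsibility, reasoning, policy, emotion), and the model sometimes emits malformed delimiters such as a trailing `)` in place of `]`. A minimal sketch of how such output could be parsed and sanity-checked — the field names come from the response above, but the parsing helpers and the repair heuristic are assumptions, not the tool's actual implementation:

```python
import json

# Example raw output, shortened to one record; note the stray trailing ')'
# that the model emitted instead of ']' (as seen in the full response above).
raw = ('[{"id":"ytc_UgwdtUqXZfVupfjeLWp4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"})')

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codes(text: str) -> list[dict]:
    """Parse a raw LLM coding response, tolerating a stray trailing ')'.

    This is a hypothetical lenient-repair heuristic: if the payload ends
    with ')', swap it for ']' before handing it to the strict JSON parser.
    """
    text = text.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    return json.loads(text)


def validate(codes: list[dict]) -> list[dict]:
    """Keep only records that carry every expected coding dimension."""
    return [c for c in codes if REQUIRED_FIELDS <= c.keys()]


codes = validate(parse_codes(raw))
print(codes[0]["emotion"])  # mixed
```

Records missing a dimension are dropped rather than patched, so a downstream tally never mixes complete and partial codings; a stricter variant could instead raise on malformed records to surface prompt problems early.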