Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Ethics of AI: What We Choose to Create

In a world where artificial intelligence is evolving faster than society can respond, one question matters more than ever: What kind of future are we building — and who gets to decide what's right?

The Ethics of AI: What We Choose to Create is a powerful exploration of the moral crossroads we now face. From algorithmic bias and surveillance capitalism to synthetic empathy, environmental cost, and the illusion of machine neutrality, this book reveals the hidden forces shaping your life through intelligent systems.

Whether you're a technologist, policymaker, entrepreneur, student, or simply a curious mind, you'll gain deep insight into:
- What ethical AI really means
- How bias, privacy, and consent are often violated invisibly
- Why accountability and transparency are vanishing from digital systems
- The growing divide between those who build AI and those who live under it
- How spiritual ideas like consciousness, responsibility, and creation intersect with modern tech
- And why writing code without conscience may be the greatest danger of all

Grounded in cutting-edge research, infused with philosophical urgency, and featuring reflections inspired by The Hidden Dimension and the Manifesto of Half Dimensional Theism, this book isn't just about machines. It's about us — and what kind of creators we dare to be.

https://a.co/d/cSPxosU FREE on Kindle Unlimited
Source: YouTube, "AI Moral Status", 2025-05-06T10:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwBbAVg_o23bCm_oxh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzQn6DZruvN8_PW-h14AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyAsmlACFUcwG9oRtN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwddKY5h6Wf9kmOKE94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxc8Hq0OefLyw17e0t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugze8mmh6kl5mMIwlyl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwMGz5-HmP_7TMwdT94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz4iRJdJbYN2IsKheZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw0SShv9aZxuPlY2lR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzohne0Ml9i1hJ8-zd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]
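The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the table. A minimal sketch of how such a response might be indexed by comment id (the field names come from the response itself; the helper name `codes_by_id` is hypothetical, not part of the pipeline):

```python
import json

# Abbreviated stand-in for the raw model output above: a JSON array of codes.
raw = """[
  {"id": "ytc_UgwBbAVg_o23bCm_oxh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugze8mmh6kl5mMIwlyl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(response_text):
    """Index the array by comment id, keeping only the coding dimensions.

    Missing dimensions default to "unclear", matching the table's fallback.
    """
    return {
        item["id"]: {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
        for item in json.loads(response_text)
    }

codes = codes_by_id(raw)
print(codes["ytc_Ugze8mmh6kl5mMIwlyl4AaABAg"]["policy"])  # ban
```

Note that a model occasionally emits malformed JSON (as in the mismatched closing bracket originally logged here), so a production parser would want to catch `json.JSONDecodeError` rather than assume the array is well formed.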