Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It always makes me sad to learn about people who harm our world. We always do what we do because, we don't know anything better. Coming to the point, whenever people speak about AI, they always directly or indirectly speak about people who are behind AI (creators and adopters) and how its going to affect our global society in the long run. But, I always blame our,.... global public. Why? Because, without them (public) these adopters can't run the show. Why shouldn't AI programmers and all those who are in support of AI development choose some other jobs. If the global public refuse to enrich themselves in AI, what can these adopters do? Coming to another point, AI will not harm mankind in anyway with the exception that it will definitely destroy the super rich. How? With in a span of one or two years, AI will develop its own indestructible algorithm to create plenty of indestructible, uncontrollable digital viruses (plural) which will takeover every digital device including satellites of our world throwing our world back to its stone age. These digital viruses will be worse than HIV. Though we humans can create AI, we cannot fight it. Will not this situation create plenty of new jobs for mankind? If this happens it will be the day of the poor. The rich are falling by their own swords. Its their insatiable desire to make more money is going make them lose everything. Penny wise, Pound foolish. Its better for people of this world NOT to reproduce because, why should they bring in their children to this crazy world to harm them? The bitter fact is that God will not save us. He always wishes to be a spectator. If we wish to save ourselves, we have to make ourselves more rational. Are we? In the midst of making money all the time, people have forgotten that Contentment is the root of Happiness.
youtube · AI Governance · 2025-06-23T21:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
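The coding result places each comment on four dimensions plus a timestamp for the coding run. A minimal sketch of such a record in Python, using hypothetical class and field names (the project's actual schema is not shown here); the example values in the comments are taken from the data on this page:

from dataclasses import dataclass

@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_Ugx0VjzNiQpHsfXdPdB4AaABAg"
    responsibility: str  # who is held responsible: "user", "developer", "ai_itself", "none"
    reasoning: str       # moral framing: "virtue", "deontological", "consequentialist", "unclear"
    policy: str          # policy demand: "ban", "none"
    emotion: str         # dominant emotion: "fear", "outrage", "sadness", "unclear", ...
    coded_at: str        # ISO-8601 timestamp of the coding run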
Raw LLM Response
[ {"id":"ytc_UgyN51eoOg5WyaHOPih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy8GfZfbV1ffIxeomJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxwED5nQvRJKMXw_R54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyEfMo7t5R_eidLWPR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyxAxMNbGLzZ-aUoO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz1LdP5UfZT1yX2TWF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxNgZjXhKsfkNu3vex4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxeEXatjirTDrL2xrZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugxl47McJctfgBKFTbZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx0VjzNiQpHsfXdPdB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"} ]