Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI doesn't seem scarry at all. What scares me are people who will abuse it. Hacks, cheats and bots make insane profit nowadays. To put it in perspective by one of examples. To use a cheat in game you'd have to pay 50$ a week. And there is a waiting line. Another example, a 12 year old who know how to make online purchase can order a DDoS attack just about on anyone, priced depending on the target. Now that is where it becomes scarry as hell. Next to these exploiters, AI itself will look like saint. And there is no absolute defense against hackers. Even when I hear that quantum computers have some insane security. But then, who is trying to crack quantum computer security nowadays, nobody does. Where you have a who knows how many hackers out there, in a dark industry that's booming with money, attracting more people in age of technology where we are all ever more technology oriented beings. If you ask me. We should give up on building AI, for now. But I am not mad about it. Besides it's to late. We already started progress and now there will be always someone who can continue even if you outlaw it. And all that said, I didn't even mention how will people react to non biological lifeform. Historically, there is gonna be hate. You know where that one's going...
Source: YouTube · AI Governance · 2023-04-18T10:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwfLP0cUJyzkNmv95x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgxI1kCIu41A4GS67T94AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzS_pEd-qLuIj9j1Pt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyIurwijJ3I3sdYy_R4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwTP8IU24uhNtIrTul4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwIB0UmPzM6eU0L1094AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzQxMdl2sZHmQkuKel4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgyxTjmRqqfcjqQRq1d4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgyEswdxkPkOXCP_RbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxIeOv6Q3sR66RW7IB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]