Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This sounds fine, in theory, but what exactly is a current problem we should be addressing that the resources going to AI could be used for? Because to most people, if we had easy solutions we would be doing them (for example, I think a massive renewable energy program would be easy to do and could really help, but it is essentially an intractable problem due to politics, so we are not going to suddenly fix renewable energy resources by not investing in AI. And many problems will have similar issues, that is why they are problems). Countries see it as the grail because they fear their political dominance will end if other countries beat them to AI. A current solution to that that seems "easy" to me, in theory, would be for all our countries to just play nicely with each other. But in the real world we know they won't. People want AI that cures diseases. Currently, diseases are mostly "cured" from massive profit making ventures that have their fingers in everything, though often building upon research from vastly underfunded and now distrusted universities. Pouring money into AI to help cure disease doesn't seem so silly compared to hoping our current structures will improve. It is not fear at forgoing the most simple and effective solutions now that will get people to think about AI investment, because those solutions have proven to not be simple. It is genuine discussion about the potential changes, good and bad, that AI can cause that will get us to pause and think. Dismissing AI as hype does not help to do this.
youtube AI Moral Status 2025-07-24T15:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgyW6wdZuKeFDWBNaC14AaABAg.AKw1Bh465MLALUCX5TB5sg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyyRmHU9xqUCZYwZB54AaABAg.AKvtoPwlPoCAKvtupetDsC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwGBvisktrwYHFlu6l4AaABAg.AKvoBtI_muxAKvqbRygS5L","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgyDehvHl3nnH0DJM_h4AaABAg.AKvkzEqGHhdAKwsRDRDd2-","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugw2GKIxUk892yaABSZ4AaABAg.AKvkrAaHHsPAKwGrYoSLIm","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_Ugxmdp33praTZigNdTR4AaABAg.AKvkqi6FyV8AKvqybxW2DX","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzzcioOvqVDbHz6FR14AaABAg.AKvkXWb7bFUAKwJlKhp5CG","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzzcioOvqVDbHz6FR14AaABAg.AKvkXWb7bFUAKyE1qas8AT","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzzcioOvqVDbHz6FR14AaABAg.AKvkXWb7bFUAKyFP13tOZm","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzzcioOvqVDbHz6FR14AaABAg.AKvkXWb7bFUAKyHQU5wJHT","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
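The raw response is a JSON array with one object per coded comment: an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and indexed by comment id; the field names come from the response above, but the `RAW_RESPONSE` string, its shortened ids, and the `index_codings` helper are illustrative assumptions, not part of the actual pipeline:

```python
import json

# Hypothetical raw LLM output in the same shape as the response above
# (shortened ids for readability; real ids are long YouTube reply keys).
RAW_RESPONSE = """
[ {"id": "ytr_abc123", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "approval"},
  {"id": "ytr_def456", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate",
   "emotion": "outrage"} ]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and index the dimensions by comment id."""
    items = json.loads(raw)
    # Map each id to its coding dimensions, dropping the id from the value.
    return {item["id"]: {k: v for k, v in item.items() if k != "id"}
            for item in items}

codings = index_codings(RAW_RESPONSE)
print(codings["ytr_def456"]["emotion"])  # outrage
```

Indexing by id makes it cheap to look up the coded dimensions for any single comment, which is what the "inspect the exact model output for any coded comment" view above does.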