Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the suggestion that everyone has access to GPT-7 and therefore is on an equal footing is just ludicrous. It's like, what can you do with that to create economic value in a world where some people have access to GPT-7 but some people who have more money have access to even better models that can do more? Those without means and without an existing advantage I think will be perpetually behind. It will mimic the existing system in a sense that there is opportunity for those with high agency to sort of break out of that, but I think it's going to become a lot more magnified and on a much larger scale. This is definitely going to be a wealth concentration thing, regrettably. And I think AI is going to force that because the speed with which this is going to replace jobs will far surpass the ability of society to come up with new jobs and the ability to make people useful. People draw parallels to other things, like when we invented machinery and agriculture and stuff. The speed of disruption of that is not even remotely close to this, and this is the first time in human history that we've had something that will potentially be smarter than humans. I know there's a lot of question marks around that, but I think it is foolish to bet against an exponential curve. Hopefully it doesn't become smarter than us honestly because I don't think we're going down a path that is actually beneficial. Everyone speaks about the benefits in terms of medical breakthroughs and everything, but even though I use AI a lot and I'm a fan in a way, I don't think that the benefits are going to be net positive for society because of how much it's going to disrupt the social structure and fabric of the world. I really hope that I'm wrong about all of this, but I obviously don't think that I am.
Source: YouTube · AI Moral Status · 2025-07-28T10:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyC68EWigFngESqT0x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZwLwD3UbKE2zvsG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXf0AfYGO4_I0cirt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyo6gmCa5mSNv-cwIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzRfZu6rGTtGcgKhlJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwNIseOfwhzehEkduB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy3vFG7RLMuTt98Qxp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxmurr5-5x-Xs36EKN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugxcgtw4pcwcvEROHnV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw544y2CY3YO544nId4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
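The raw response above is a JSON array with one record per comment, keyed by comment `id`. A minimal Python sketch of how such a batch can be parsed and inspected is below; the two records are copied verbatim from the array above, but the overall workflow (indexing by id, tallying a dimension) is an assumption about how this dashboard consumes the output, not its documented implementation.

```python
import json
from collections import Counter

# Two of the records from the raw LLM response above (ids are real;
# the full batch contains ten such records).
raw = '''[
 {"id":"ytc_UgyC68EWigFngESqT0x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzRfZu6rGTtGcgKhlJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]'''

# Index the batch by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coded dimensions for one comment.
codes = records["ytc_UgzRfZu6rGTtGcgKhlJ4AaABAg"]
print(codes["responsibility"], codes["policy"])  # distributed regulate

# Tally one dimension across the batch.
emotions = Counter(rec["emotion"] for rec in records.values())
print(emotions["fear"])  # 1
```

The lookup shows why the coding result table above (responsibility: distributed, policy: regulate, emotion: mixed) corresponds to record `ytc_UgzRfZu6rGTtGcgKhlJ4AaABAg` in the raw array.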