Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Tyson is not wrong - AI will force humans to focus on areas where humans singularly have the advantage - but for someone so brilliant, he's also being spectacularly naive. Because within ten years or so, most humans in the workforce won't have that ability...most of the workforce isn't skilled enough to, say, write an original poem like Neruda or stand-up comedy like Chappelle or whoever. The areas of uniquely human contribution will still be valuable, but will require ever greater human ingenuity, wit, insight, warmth, empathy, freewheeling imagination, etc, at pretty sophisticated levels not accessible to (fill in the blanks- billions of truckers, cashiers, waitresses, etc). You can't build a global economy on that. Also, the mediocrity of "good enough" will suffice for most things so that only special creative minds will command the premium dollars and time required by "artisanal" human quality. AI can write a pretty decent, if not exceptional, love letter, given enough follow-up refinement. I can definitely beat it and write something soul-stirring and unique, but it might take me a day and a half instead of mere seconds. But our entire culture, society, and economy is going to reflect that, proportionally, the one that takes seconds is good enough because it's 70% of the way there. Why bash your head open for so long, or pay for that, unless it's one of the few exceptional, globally recognized masters of the form? So above-average material will reign supreme, by and large, and there won't be enough of a market to pay top dollar for humans to make up the difference at a mass scale.
Source: youtube · AI Moral Status · 2025-07-24T11:5… · ♥ 24
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxrWJh1jzZmkAe_BLx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},{"id":"ytc_UgzO4z7a2S9DzFtR3dt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_Ugx-RSTR2D19kXjDDIN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugwc9PXN4Doaw_3Zo7R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgxLbO-GBr3uQyy7opd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_Ugw4sCR-7hBpryYjGjp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},{"id":"ytc_UgzlVJChXy3nmWCWdwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgwyFM97rq2IynGXyLt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgyMRbwjTdEB8aoSu1B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugyd2mgM-HDjzuilk194AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}]