Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’d like to introduce to this discussion a perspective that seems missing from the conversation between the scientist and the interviewer: the coming divide — a great fork in the road, a crossroads where humanity will be split between those who can harness the power of AI and those who will be left behind. There is yet another aspect, again rooted in the profit model, that demands our attention. Those who can afford to reap the benefits of AI-driven technological research and development — for example, genetic coding and editing of human DNA — will stand on one side of this divide. The poor, however, will be excluded, unable to alter their destinies. What lies ahead is a dangerous bifurcation: a lower class of people condemned to disease, early death, and limited opportunities, and a super-rich, genetically engineered elite class — perhaps capable of living 300 to 400 years, if not longer, thanks to AI. This fork in the road is not simply technological; it is moral, social, and existential. Unless we address it directly, AI will deepen inequalities and harden the boundaries between the haves and the have-nots, shaping a future that is not merely advanced, but profoundly divided.
youtube AI Moral Status 2025-08-20T14:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       contractualist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwHaqFAx0o2LyR91-94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFfIwW8gnD7maa_B54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzYQ42Aw_ZvYXaldkV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwAbZ1al9G37dDf9FB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzqaR2KHOFWnGrL-Bx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwhfbVwYhyMJTfI6u54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw3-tqe6gtHzXKQcJh4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx23ueriAnCSIiwxq94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwtk-E7LCX5djtJYJt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxQQHmMzXUdomqgsJB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
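The raw response is a JSON array in which each element carries a comment id plus the four coded dimensions. A minimal Python sketch of how such a payload could be parsed and a single comment's coding looked up by id is below; the record shown is taken from the response above and appears to match the contractualist/fear result in the coding table, though that correspondence is an assumption on our part.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw_response = """[
  {"id":"ytc_Ugw3-tqe6gtHzXKQcJh4AaABAg","responsibility":"none",
   "reasoning":"contractualist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwtk-E7LCX5djtJYJt4AaABAg","responsibility":"company",
   "reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# Parse the array and index the records by comment id for O(1) lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Retrieve the coding for one comment and read off its dimensions.
coding = by_id["ytc_Ugw3-tqe6gtHzXKQcJh4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # contractualist fear
```

A validation pass over such payloads (checking each record for the four expected keys before accepting it) would catch malformed model output early; the dictionary index makes per-comment inspection cheap when the batch is large.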