Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we did listen to Nate and Eliezer, we wouldn't have the incredibly useful LLMs that we have now. Nate's and Eliezer's book is a bit one sided to put it mildly. It's worth hearing what the other side has to say for those who haven't heard it check out Neel Nanda, Joscha Bach, David Shapiro. Many risks are real, but super intelligence killing us all by default is fantasy.
Source: YouTube — AI Moral Status — 2025-11-04T16:3…
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: industry_self
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzpJOJ5oHMJIZmgBL94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwnHk75fLrwbk95GTN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx4gimSeo580EIZhj14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyAHhduq9mOAAt_mXB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyYH8M0j7512fDUwSd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyXYkRldTR9sh5kDHV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy492lhBdoP0viiX1x4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_Ugw999Q8W5OZ6vjczSd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyuBcanRzedEgojXSl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxKgAwmaN93UQfXVH54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
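Since the model returns one JSON array covering a whole batch of comments, looking up the coding for a single comment means parsing the array and indexing by `id`. A minimal sketch of that step, assuming the raw response is valid JSON (the sample here is trimmed to two entries from the array above; `codings_by_id` is a hypothetical helper, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (trimmed to two entries for illustration).
raw_response = '''[
  {"id": "ytc_UgzpJOJ5oHMJIZmgBL94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyAHhduq9mOAAt_mXB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]'''

def codings_by_id(raw: str) -> dict:
    """Parse the raw response and index each coding dict by its comment id."""
    return {entry["id"]: entry for entry in json.loads(raw)}

codings = codings_by_id(raw_response)
# Pull out the four coded dimensions for the comment shown above.
coding = codings["ytc_UgyAHhduq9mOAAt_mXB4AaABAg"]
print(coding["policy"])  # industry_self
```

In practice a real parser would also guard against malformed JSON or missing ids (e.g. the `unclear` codings above), but the indexing step is the same.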