Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with telling AI to work for the good of mankind is that an advanced AI might not find that having billionaires is in the interests of most of humanity, and the billionaires creating AI don't want it to decide that. Also, if some of those people creating AI are accelerationists who think "hardship is good for humanity," they may want AI to CAUSE problems
Source: YouTube — AI Moral Status, 2025-11-03T05:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzn_IKre8Q3Ac-ZgkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzOlJH3MRZNJZs6Sap4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzWoSgkoR5BxrplSTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxYTOoUHXnHZz6_Hht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyvUDlGQfN8ZzJJWwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzMgySiEz2yhF51O854AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwFE-FHa_sLG-vXkg14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzFQ_vWNd2gyl7XkFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgySVE2ZBVNUJ9ALttl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwXywL2CE5FZbPOO954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
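When inspecting raw LLM responses like the one above, it helps to validate each record against the coding schema before trusting the result. The following is a minimal sketch of such a check; the allowed values per dimension are inferred from the responses shown here, and the real codebook may define additional categories (an assumption, not confirmed by this page).

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# The full codebook may include further categories (assumption).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "fear", "mixed"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and list any schema violations."""
    problems = []
    for rec in json.loads(raw):
        rid = rec.get("id", "<missing id>")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value is None:
                problems.append(f"{rid}: missing dimension '{dim}'")
            elif value not in allowed:
                problems.append(f"{rid}: unexpected {dim}={value!r}")
    return problems

raw = ('[{"id":"ytc_UgxYTOoUHXnHZz6_Hht4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(validate_codings(raw))  # [] when every value matches the schema
```

An empty list means every record used only values the schema recognizes; any other result pinpoints the comment ID and dimension to re-inspect.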