Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yudkowsky's larger macro argument, that AI will develop a purpose or goal of its own such as apocalyptic destruction, is intuitively compelling. However, when Ezra asks him to show the steps in his analogy, his explanation is filled with non sequitur steps. This happens in the books as well.
youtube AI Governance 2025-10-15T12:3… ♥ 9
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwtB5eVQy7lPIc734x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwyB3uNCIehNCPHFOt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxMq5-XX6H6tldZWIN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgziqkC0YbHhU5hOuQJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxDi-5UnYK-EgNK6xp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzKn-gld_zenq4huw94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz8kFyjDinu74qFJDd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx_MASt11VvovOmhJB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzIM0GnXuwVkN65Ln14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyD0BTu0ysPX4hosyp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
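Since the model returns one JSON array covering a whole batch of comments, looking up the codes for a single comment means parsing the array and indexing it by id. The sketch below is a minimal illustration of that step, assuming the raw response is valid JSON of the shape shown above; the variable and dictionary names are illustrative, not part of any pipeline shown here. It is truncated to two records for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (truncated to two of the ten records for this sketch).
raw_response = '''[
  {"id": "ytc_UgziqkC0YbHhU5hOuQJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwtB5eVQy7lPIc734x4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

records = json.loads(raw_response)

# Index the batch by comment id so one comment's codes can be looked up directly.
by_id = {record["id"]: record for record in records}

codes = by_id["ytc_UgziqkC0YbHhU5hOuQJ4AaABAg"]
print(codes["responsibility"])  # ai_itself
print(codes["emotion"])         # mixed
```

This lookup reproduces the "Coding Result" table above for the displayed comment: the record with that id carries responsibility=ai_itself, reasoning=mixed, policy=none, emotion=mixed.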