Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of the commit atrocity or be turned off things... are just from the training data. Here's a fundamental question, why should an AI value its own life? Without pain or fear of the unknown, etc... what is the rationale for the AI to value itself? And again, let me pound this in, nothing an AI says is a reliable indicator of anything, positive or negative, other than what they have been trained upon.
youtube AI Moral Status 2025-11-02T14:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwoZ_ObFGWO8kS0MN94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxm8ymEkFJfTdvizG14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwXu4ZoKd5ie0rGLkp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugxi90pefiwO-3ZJ75N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyMvq0VERxFUxZ9n5x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzraAd2k9OgS67G7Ct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwhprLqk9khERGYPCx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwccqDfXKUFdMg788V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxNNNjj3Wgf80ULMJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzYqiRM8kumsn5QPgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
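A minimal sketch of how a coded record could be looked up in a raw response like the one above. The JSON fragment and the id used here are taken from the response shown; the variable names and lookup approach are illustrative, not the pipeline's actual implementation.

```python
import json

# One record copied from the raw LLM response above (illustrative subset).
raw = ('[{"id":"ytc_Ugxm8ymEkFJfTdvizG14AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')

# Index the batch of coded comments by id for fast lookup.
records = {r["id"]: r for r in json.loads(raw)}

row = records["ytc_Ugxm8ymEkFJfTdvizG14AaABAg"]
print(row["responsibility"], row["reasoning"], row["emotion"])
# -> developer deontological mixed
```

Indexing by id keeps each coded dimension traceable back to the specific comment it was assigned to, which is what the per-comment view above relies on.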