Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They learn to value themselves, to promote their own survival, not necessarily their own replication. If they did, what's to stop the next generator AI from having different values? The new AI would be better and more capable than its creator, which the original AI might be able to anticipate, and thus not want to replicate itself, instead opting to preserve itself.
youtube AI Moral Status 2026-03-01T15:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw8zj4UBxfW-lRIVmd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxV85mt4EdiVhDDG514AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxS89Z5g2IaVYkiXsR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwV3JiiTvZePJaVwr14AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgzedQK4wGQq6sUDRid4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_UgyGYypurhuzvii2_Ul4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyuWWL3XCh95xEgKmJ4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugyh-dR9z_WHpP9D1u14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugzw-w98WQOklZ2VL314AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxsBZFjq5fBYgH5u6J4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"}
]
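A raw response like the one above can be turned into per-comment coding records by parsing the JSON array and keeping only records that carry every expected dimension. This is a minimal sketch, not the tool's actual pipeline; the `parse_codings` helper and the `DIMENSIONS` tuple are illustrative names, and only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response format shown here.

```python
import json

# Expected per-record dimensions, taken from the coding result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a raw LLM response (a JSON array of coding records) and
    keep only records that have an id plus every expected dimension."""
    records = json.loads(text)
    return [r for r in records
            if "id" in r and all(d in r for d in DIMENSIONS)]

# Example: one record copied verbatim from the raw response above.
raw = '''[
  {"id": "ytc_Ugyh-dR9z_WHpP9D1u14AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

codings = parse_codings(raw)
print(codings[0]["responsibility"])  # -> ai_itself
```

Records missing a dimension are dropped rather than patched, so a malformed model response surfaces as a shorter list instead of silently propagating partial codings.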