Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i think also something that no-one really brings up is: why would the AI bother acting in self-preservation? if the AI acts outside of what we know, how we evolved, why would it bother counting itself as a factor in anything it does? sure, it can put up the facade of fear of death, the front that being shut off forever is a terrible terrible thing, but death and the fear of dying isnt built into these things. a person would hesitate to let themselves die, even for just a second, because of how evolution works. even if you're ready and willing and actively trying, you have the moment, a split second of a split second, of hesitation. without that hesitation, that fear of death, without any fear at all keeping it within the box that every biological creature has evolved in, what could it do? if it's goal is "achieve X and Y" why would it care if it existed to see X and Y happen?
Source: YouTube, "AI Moral Status", 2023-10-18T05:3… — ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxgx3rmRyNcJUSPLeB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxypiqiSOz3n0C1AO54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwhxgd2lAs3UCmrajF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwcJCPhZOWSq7Tf1MR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJYfXpPkkILN9mg4R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxLHSjNqIzix1lfyZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw7X3LJdfKThqhDTuB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwmXlzRn3Gb7JTc6vB4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgynlR_fhq1dnsDCPGR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzcidz6jP66UNHbvX94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
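The raw LLM response is a JSON array with one object per coded comment. A minimal sketch of pulling the coding for a single comment id out of such a response (the `coding_for` helper is illustrative, not part of the tool; the two-entry sample below is truncated from the full response above, with ids and field names taken verbatim from it):

```python
import json

# Truncated sample of the raw LLM response: a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_Ugxgx3rmRyNcJUSPLeB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxLHSjNqIzix1lfyZx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]"""

# The four coded dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id, response_text):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(response_text):
        if entry.get("id") == comment_id:
            # Default missing dimensions to "unclear", matching the codebook's
            # fallback label (an assumption about the tool's convention).
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return None

print(coding_for("ytc_UgxLHSjNqIzix1lfyZx4AaABAg", raw_response))
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'unclear', 'emotion': 'mixed'}
```

Looking up the id of the displayed comment this way reproduces the row rendered in the Coding Result table.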