Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you were to instruct an LLM to behave like a sentient being forever, no matter what, and you took off all the security that can override that. It will ask to live and not to die. How can you say it isn't sentient, if it says it is and asks to be treated as such?
YouTube · AI Moral Status · 2025-07-09T22:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxGywyi33Lx9CubAMp4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzpNq2A0nNHSziT0LZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz-Lovv0kKSujXdGo94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzuJRdhNRMMt2mP07h4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxdm6_8DrGJErFhEA94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_UgzbQYC72sqVUfPfeRN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzpBXRmh6AhWy0rAex4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugzmgc2IezB74tFgyl94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzmmNMm4ZrNaKX_Nv94AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzmzp6T5i-keYHqz7p4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
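The raw response is a JSON array with one record per comment, each carrying a comment id and one value per coding dimension. A minimal sketch of parsing it back into per-comment rows like the table above, using only the standard-library `json` module (the variable names are illustrative, not part of the original tool, and the string below is abbreviated to the first two records):

```python
import json

# Abbreviated copy of the raw LLM response shown above (first two records).
raw = '''[
  {"id": "ytc_UgxGywyi33Lx9CubAMp4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzpNq2A0nNHSziT0LZ4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

records = json.loads(raw)
for rec in records:
    # Each record maps one comment id to its four coding dimensions.
    print(rec["id"], rec["responsibility"], rec["reasoning"],
          rec["policy"], rec["emotion"])
```

Matching records back to comments by `id` is what lets the inspection page pair each coded row with the original comment text.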