Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@lomiification You have a misunderstanding of what alignment means in this context. An AI that makes deepfakes is not aligned or misaligned in any meaningful sense, it's very simple compared to the AI models that people are concerned might be misaligned. An AI that makes fake videos of people committing crimes without the user asking isn't better than an AI that will do it only when asked, it's just less useful. Whether or not an AI can do what you ask is about its capabilities, which is a separate thing. Alignment instead refers to whether an AI will try to do what humans want it to, almost always in the context of general AI models like LLMs. Unfortunately, nearly all of the money and research in the field is going towards capabilities research, making AI smarter and better at doing things. There's a lot of research on making AI better at biology for example, but not a lot on ensuring that when we ask for a cure for cancer it doesn't give us a deadly virus instead.
youtube AI Moral Status 2025-10-31T04:0… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugwf2zF_xWkRggRi-X94AaABAg.AOw2A9e-onHAOxT_sXTEu7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxZm2WJibEPTyCvE1x4AaABAg.AOvxD8jYdhlAOw3q8hoOdJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugy2k2xFGP9gDYywYKh4AaABAg.AOvs7JHmu_wAOvvaaNQJ-N","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugz6Pt_A9K6iBockeqF4AaABAg.AOvrDnD-1uCARjwhVb1Ij_","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugz-_lMNf5m98fTgUux4AaABAg.AOvn8jkMIP7AOvrjV8GlR2","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxNSpsc9xXpxxv-FSF4AaABAg.AOvmjoUJUYxAOvtcKqB6gx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxaZLBfKqrXIvI_dMt4AaABAg.AOvlsJO3MsaAOvptoJvbBS","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxaZLBfKqrXIvI_dMt4AaABAg.AOvlsJO3MsaAOvv0cOJ1B6","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxdGAOj0gQkipSK2Ml4AaABAg.AOvlcnF6hFlAOw-xfix-wo","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugzhzx6dO_u1tTU8ZIp4AaABAg.AOvlLnMfxZzAOvrfXGCOu6","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
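A raw response like the one above can be parsed and sanity-checked before the codings are accepted. The sketch below is a minimal, hypothetical example: it assumes each dimension's allowed values are exactly those observed on this page (the full codebook may define more categories), and the function name `parse_codings` is illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred only from the sample output on
# this page; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of objects) and
    reject any row whose value falls outside the expected categories."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}"
                )
    return rows

# Example with a single coded comment (hypothetical id):
raw = (
    '[{"id":"ytr_example","responsibility":"company",'
    '"reasoning":"deontological","policy":"unclear",'
    '"emotion":"indifference"}]'
)
rows = parse_codings(raw)
print(rows[0]["responsibility"])  # company
```

Validating up front means a malformed or off-schema model response fails loudly instead of silently writing an unknown category into the coded dataset.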