Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Tough Nation: What if their choices don't align with ours? They might be clever enough to see that we could be a threat to them, and be combative rather than cooperative with us. With humans we accept that we will have conflicts in exchange for us having free will, but perhaps with an AI they will be so much more powerful that the risk of conflict makes that tradeoff different and the risk of them having free will be too high. Idk it's all very speculative, but I guess an ideal scenario is that we can cooperate well and benefit each other. Who knows what will happen.

Source: youtube · AI Moral Status · 2020-12-20T23:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
 {"id":"ytr_UgwBelcJQkNkU8VDmwp4AaABAg.AKizI7wN9FkAKj1UZryLnX","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytr_UgzwvbdMv52wsfGCDrR4AaABAg.9SgQO4CL3kW9cehGL-gQ9f","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_UgzwvbdMv52wsfGCDrR4AaABAg.9SgQO4CL3kW9cepKE1HhWb","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytr_Ugx0fNPuglmAB4K2WiZ4AaABAg.9NI_DN6BJa89RnStFaiIEe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_Ugx0fNPuglmAB4K2WiZ4AaABAg.9NI_DN6BJa89TFfXnxmFON","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_UgxL67iRnZmsn98cJg14AaABAg.9BgaUGb-Gl49cXgH5MLWr6","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
 {"id":"ytr_UgwcHhN1oBEmIXh4vz94AaABAg.9B6Moarkoeq9fugCL6RapT","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_UgwBUyKtpakcyl4wwIl4AaABAg.9B1f6SBE-ye9Bt6OeRbHWu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_UgxrNa5lwCh6-bX0VnZ4AaABAg.9B0vmq5NI0U9HVtjXNXt4Q","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytr_UgxrNa5lwCh6-bX0VnZ4AaABAg.9B0vmq5NI0U9kNqXqZwvPh","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
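To inspect the model output programmatically rather than by eye, the raw response can be parsed and the coded dimensions looked up by comment id. The sketch below is illustrative, not part of the tool: the `coding_for` helper is hypothetical, and the sample string reproduces only two entries from the response above, with the field names (`responsibility`, `reasoning`, `policy`, `emotion`) taken directly from it.

```python
import json

# Two entries copied from the raw LLM response shown above (array truncated for brevity).
raw_response = '''[
 {"id": "ytr_UgxrNa5lwCh6-bX0VnZ4AaABAg.9B0vmq5NI0U9HVtjXNXt4Q",
  "responsibility": "ai_itself", "reasoning": "consequentialist",
  "policy": "unclear", "emotion": "fear"},
 {"id": "ytr_UgwBelcJQkNkU8VDmwp4AaABAg.AKizI7wN9FkAKj1UZryLnX",
  "responsibility": "none", "reasoning": "unclear",
  "policy": "unclear", "emotion": "fear"}
]'''

def coding_for(comment_id, raw):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            # Drop the id so only the dimension/value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

result = coding_for("ytr_UgxrNa5lwCh6-bX0VnZ4AaABAg.9B0vmq5NI0U9HVtjXNXt4Q", raw_response)
print(result)
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'fear'}
```

The extracted values match the Coding Result table for this comment, which is the point of the raw-response view: each coded comment should be traceable to exactly one entry in the model's batch output.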