Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@alansmithee419 I agree that if the AI systems have the right goal, then there probably isn't much residual risk. I don't think that we know what the right goal is, or how to persuade AI systems to adopt the right goal if we knew it. As systems get smarter, they will get better at finding and exploiting imperfections in the feedback that they receive during training.
youtube AI Moral Status 2025-10-31T15:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyGDWc0iNMHE29Y-qp4AaABAg.AOwyq6rRRRAAQNPX-K-r0z","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwkGd_OWtC-51LYHUl4AaABAg.AOwxCjNfKfPAOxVrEsOyAl","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzKrVVcaRxCW5jxgoB4AaABAg.AOwvnaY-7qdAOy-6FotgBA","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugyw6fhE_puxgOT9Otd4AaABAg.AOwuITYQwS_AOwwsh6wtCz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugyw6fhE_puxgOT9Otd4AaABAg.AOwuITYQwS_AOwyaDtuxuM","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugyw6fhE_puxgOT9Otd4AaABAg.AOwuITYQwS_AOx-U9KU2BJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyW6pJd9Hs3u6CotBV4AaABAg.AOws_FUM07hAOwv2bQl-dV","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyW6pJd9Hs3u6CotBV4AaABAg.AOws_FUM07hAOxPu9ceH9d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw97TiE8gqJdjSAQql4AaABAg.AOwruDBcQOBAP1AUK6AqgY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgxRGLCxCRK_44Hs5PV4AaABAg.AOwqrIgoeI_AOx9PjIZzky","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
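A raw response in this shape can be parsed and sanity-checked with a few lines of Python. This is only a sketch: the `DIMENSIONS` tuple mirrors the four coding dimensions shown above, and the `raw` string here is an abbreviated stand-in for an actual model response, not one of the records listed.

```python
import json

# The four coding dimensions used in the results above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Abbreviated stand-in for a raw LLM response (real responses carry ytr_* ids).
raw = '[{"id":"ytr_example","responsibility":"developer",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'

records = json.loads(raw)
for rec in records:
    # Flag records the model left incomplete before trusting the coding.
    missing = [d for d in DIMENSIONS if d not in rec]
    if missing:
        print(f"{rec.get('id', '?')}: missing {missing}")
    else:
        print(f"{rec['id']}: " + ", ".join(f"{d}={rec[d]}" for d in DIMENSIONS))
```

Checking for missing keys before use matters because a raw response is exactly what the model emitted; nothing upstream guarantees every record carries all four dimensions.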