Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The machine that does it's task in unexpected ways. Imagine an AI is tasked with making sure that there's always enough electricity for all computers to run. That's a task somebody might give to an AI, right? The problem arises as soon as the AI analyses the task. Humans might switch off the AI or electricity. This would mean that humans are a potential threat to the task. It would be quite obvious what this AI would do to achieve the task: Eliminate the threat. And there's no consciousness needed for that. If anything, an AI gaining consciousness might be the only thing that can prevent doom for mankind by bad tasks.
Source: YouTube, "AI Moral Status", 2025-06-05T12:5…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzQvxPqRmOkR8pjKSR4AaABAg.AIzW-g79N1PAIzkGwLvvO8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-LJm1gj2O", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-a0U8e9Io", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-bwHYxwhe", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_Ugz6-lP11ILim4iOur14AaABAg.AIzSHACWShRAIzoZkWOr08", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugz6-lP11ILim4iOur14AaABAg.AIzSHACWShRAJ-vCYmHO3o", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzXVLfzpkP17SFOjCp4AaABAg.AIzCqcGf2TtAJ05kl4SjUx", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgzI4u1Atv3g_GmI-nJ4AaABAg.AIzCJLtSkOqAIzszx_hAVt", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyvWWOfbUU8kM3TVCV4AaABAg.AIyqhvkZBYOAIzmcGVwfaH", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytr_UgxMKFs0jNz_uGGZc7p4AaABAg.AIypU0PbKZLAIz9KOQz0IZ", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
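A raw response like the one above is a JSON array of per-comment records keyed by the same four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). As a minimal sketch, assuming Python and using shortened, hypothetical record IDs, one way to parse such a response and tally values per dimension:

```python
import json

# Hypothetical example records; the IDs here are placeholders, not real ones.
raw = """[
  {"id": "ytr_abc", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytr_def", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw_response: str, dimensions=DIMENSIONS) -> dict:
    """Parse a raw coding response and count values per dimension.

    Records missing a dimension are counted as "unclear".
    """
    records = json.loads(raw_response)
    counts = {dim: {} for dim in dimensions}
    for rec in records:
        for dim in dimensions:
            value = rec.get(dim, "unclear")
            counts[dim][value] = counts[dim].get(value, 0) + 1
    return counts

counts = tally(raw)
print(counts["responsibility"])  # {'ai_itself': 1, 'user': 1}
```

A tally like this makes it easy to spot when the model defaults a dimension to "unclear" for most records, as happens with the policy dimension in the response above.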