Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@CrimeSpree-u4u The ai will not decide that if its not programmed/grown that way. And also there is not only one ai. And it is likely posible to make AGI safe. Agents are trained on objectives. Observing the ethical behavior of other agents itself can be such an objective. There are many ways to make multiagentic systems safe. The problem are corrupt people.
Source: youtube · Viral AI Reaction · 2025-11-22T21:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgxWyDv3-G2_xCLsAQZ4AaABAg.APqGj-0Oc05APyz5tfFHIC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugxa4pt7EjAywSLnjd54AaABAg.APqGaiufUXVAPqMiEt_2pa", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugy9_YeIW9UtNWrQR_l4AaABAg.APqGHxZQooxAPqIGd4f-Ov", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugx_zeqoXRwSbW0onll4AaABAg.APqG9P2clLOAPqYQs6t53L", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxSfjnGft9qyHe_XgR4AaABAg.APqFvvHtUh8APqd9gAnwTP", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxMdSyxBaaZy9kRL494AaABAg.APqFisSChLmAPqQ8tAcLLt", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxMdSyxBaaZy9kRL494AaABAg.APqFisSChLmAPqSynu1jdG", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgxMdSyxBaaZy9kRL494AaABAg.APqFisSChLmAPqVVU21i4A", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwMLuMOhXITvm2hQiZ4AaABAg.APqF9n_I1MoAPqPODE_45L", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyZD0PcHylcoKo2RN14AaABAg.APqEWwMo6JOAPqH1A1jiri", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
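A raw response like the one above can be checked programmatically before its labels are trusted. The sketch below is a minimal, hypothetical validator: it parses the JSON array, verifies that each record carries only labels seen in this output (the allowed sets are inferred from the values visible here and may be incomplete), and looks up the coded dimensions for a single comment id. The function names are illustrative, not part of any tool shown above.

```python
import json

# Allowed label sets per coding dimension, inferred from the raw response
# displayed above; the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "unclear"},
}


def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with out-of-codebook labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records


def lookup(records: list[dict], comment_id: str) -> dict:
    """Return the coded record for one comment id (raises if absent)."""
    return next(r for r in records if r["id"] == comment_id)
```

For example, feeding the raw array shown above through `parse_batch` and calling `lookup` with the id of the seventh record would return the developer/deontological/industry_self/approval coding that the result table displays.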