Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, there's no necessity for AI to have emotions analogous to humans to be agentic and dangerous. This is a really bad anthropomorphism.
Source: youtube · AI Governance · 2025-06-21T10:2…
Coding Result
Dimension        Value
---------        -----
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugxt-b8135SNcbeRe6V4AaABAg.AJcqi5HC7QCAJcr0xUW-Xl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgynUiCZxz0aNIrz7dp4AaABAg.AJcc8M7T6soAJcd8J1a3Ga", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzQBReemd161WNDw4N4AaABAg.AJcac1WpxkzAJceBzzyyru", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxvDSP4TSzc6gqUB_R4AaABAg.AJcUKoA-XvEAJcg1DYcKqf", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxvDSP4TSzc6gqUB_R4AaABAg.AJcUKoA-XvEAJdcfv19_WB", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzJJS3BE0gSHd6PK2V4AaABAg.AJcRS_eT6jpAJcU0cQLa_A", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgzJJS3BE0gSHd6PK2V4AaABAg.AJcRS_eT6jpAJcVQDjsQ6B", "responsibility": "unclear", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgzgGfkzLPJ0_lyA3L94AaABAg.AJcQIqw9EI8AJchRQgKAkl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyrUZg1XR6UbP08uv94AaABAg.AJcOu5gb49MAJf5YauqfFN", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugw2EsPDs-YThs7CDvB4AaABAg.AJc6jTcnqjUAJc7isR4iuo", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
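A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the `DIMENSIONS` sets contain only the values observed in this particular response, so the real codebook may define more, and `parse_coding_response` is an illustrative name, not part of any library.

```python
import json

# Allowed values per dimension, as observed in this response only.
# Assumption: the actual codebook may include additional categories.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array) into validated records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Example using the first record from the response above:
sample = ('[{"id":"ytr_Ugxt-b8135SNcbeRe6V4AaABAg.AJcqi5HC7QCAJcr0xUW-Xl",'
          '"responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"unclear","emotion":"fear"}]')
records = parse_coding_response(sample)
print(records[0]["emotion"])  # fear
```

Failing fast on an out-of-vocabulary value catches the common failure mode where the model invents a category that is not in the coding scheme, instead of silently writing it to the results table.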