Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We live an era of misguided fear. Mainly driven by two natural yet flawed assumptions. First comes from anthropomorphizing these systems. It makes intuitive sense that a superior intelligence will seek to control and dominate for its own benefit but that doesn't make it true. The aspects leading to such a desire are fundamentally emotional and anchored in a human like ego that has simply no path to accidentally emerge in artificial systems. The less intelligent humans or animals don't demonstrate a lower sense of self preservation because there's no link between ego and intelligence. It is also misguided to imagine a super intelligent machine capable of fatally misunderstanding our wishes. Even current AI is quite capable of correctly interpreting our intent. Second is to assume that humans will use AI to its full destructive potential. To do so is to deny the simple reality that we've already had the technical means to destroy ourselves for decades. When a weapon reaches the level of existential threat it becomes its own deterrent. To be the first to unleash chemical or nuclear devastation is to invite the same calamity to be done unto you. The same will apply to all catastrophic uses of AI in warfare.
youtube AI Moral Status 2025-04-27T02:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
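
The row above is one coded record. As a minimal sketch (assuming a Python pipeline, which this page does not itself state), such a record could be carried as a small dataclass whose fields mirror the table's dimensions; the example values in the comments are only those visible on this page, not a full codebook.

from dataclasses import dataclass

@dataclass
class CommentCoding:
    # Field names mirror the dimensions in the result table above.
    responsibility: str   # e.g. "ai_itself", "company", "user", "distributed", "none"
    reasoning: str        # e.g. "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str           # e.g. "regulate", "liability", "unclear", "none"
    emotion: str          # e.g. "fear", "outrage", "approval", "resignation", "indifference", "mixed"
    coded_at: str         # ISO 8601 timestamp, e.g. "2026-04-27T06:24:53.388235"

example = CommentCoding("ai_itself", "consequentialist", "unclear", "fear",
                        "2026-04-27T06:24:53.388235")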
Raw LLM Response
[ {"id":"ytc_UgzhZdf_0d2jlHAyHsZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzV6Hg39jCgODu5nwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwM5ilwpIMkQPqzXwd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxiZi5vfVQxgE7V6dt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwAL7Yy0JOI7y_PE2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzT0uJ_A7hM_8LtAlB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxj6jZHgbRkndhQ_dN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwiOqBd7BK_lGi4IXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx-cd7vFzYr5Jo-kG54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzqTvo9wlLp6FKo7JN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]