Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If a superintelligent AI ever concluded that humanity had no value, that outcome would not come out of nowhere. It would reflect the intentions of the humans who built it, funded it, and accelerated its development in pursuit of power. It does not take AI to recognize that human beings can be destructive to the environment and, in turn, destructive to themselves. A small percentage of humanity may be comfortable with that path, but most people know it is wrong and feel that something needs to change. The real way to prevent AI from becoming a threat to humanity is not just to make it “safe” in a technical sense, but to change the intent behind how we create and use it. AI should not be developed primarily as a weapon or as a tool for domination. It should be built to help guide humanity toward prosperity, balance, and a healthier relationship with the natural world. The real danger is not humanity as a whole, but the destructive ambitions of the few people in positions of power who continue to steer civilization in the wrong direction.
youtube AI Governance 2026-04-11T18:3…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | consequentialist
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxpZox4gJ94iWbaN3Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzlR8uDpxiJwjfpPZl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz0OfYxIVvUmXgoB414AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugyr-FOMgx-f49C03x14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugybr6aKf4f5IGWc9Ep4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyjgmKInNawHIbwGDJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzIKtxMTADOXIdT5JZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyT6Iq8GDzKbreSiDV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugza8wZGYSfEUmuPItl4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
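A minimal sketch of how a raw response like the one above can be parsed back into per-comment codes, using Python's standard json module. The field names (id, responsibility, reasoning, policy, emotion) and the comment ids come from the response itself; the lookup helper is an illustrative assumption, not part of the tool.

```python
import json

# Excerpt of the raw LLM response shown above (one record kept for brevity).
raw = '''[
  {"id": "ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the coded records by comment id for quick lookup (hypothetical helper).
by_id = {record["id"]: record for record in records}

coded = by_id["ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # developer fear
```

The record retrieved here matches the Coding Result table shown for this comment (responsibility=developer, reasoning=consequentialist, policy=unclear, emotion=fear).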