Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The meta writing is on the digital wall. It's becoming increasingly clearer that humanity is at an existential inflection point and modern civilizations are playing the most dangerous game beyond anything that we've ever created before. But that most diabolical and potentially self-destructive aspect now belongs in the hands of a small subset of humans who are making civilization ending type decisions for billions of people who have no control or recourse to do anything about it. International treaties that are enforceable must be accepted by the nations of the world to protect humanity from a possible threat far more sinister than individual human made hazards like pollution or nuclear weapons. If superintelligence is ever handed authority to make malicious decisions in government, warfare, or allowed to see us as the threat, the game might already be over before it ever starts. We have an obligation for the survival of our species to get this unprecedented transition right, with serious oversight and controls in place to mitigate grave risks of an unmitigated Ai arms race that imperils our very existence as self determined, independent, and free beings. Don't just wipe away our legacy in a generation or less to leave our future generations completely disconnected from nature and freewill. If we allow it to go too far, there may be no coming back from it, and our long journey as intelligent beings might be for nothing as we carelessly hand over our very essence to the artificial.
Source: YouTube · AI Governance · 2025-09-05T08:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyNF0aKgDtCUa1EQrh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4k3Wq4TaazAEiRjp4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgycNJHQCL07_tzmxBV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyZYbea1OiLTu23y0t4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzdWWIT6x_31IhMsVN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzWrPw23LWYkMaepQV4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxO5XdBZNkSiMmJh3B4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxNNETeOZ_v97rHpa14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzV3yzjkFTOPzAUw254AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx1nSzqyqqX386ZS4F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
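The raw response above is a JSON array in which each element codes one comment by id across four dimensions. A minimal sketch of how such a response could be parsed to recover the coding for a single comment is shown below; the helper name `lookup_coding` is hypothetical, and only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response itself.

```python
import json

def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM response (a JSON array of coded comments)
    and return the record matching comment_id, or None."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Excerpt of the response above, reduced to one record for illustration.
raw = ('[{"id":"ytc_UgycNJHQCL07_tzmxBV4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')

coded = lookup_coding(raw, "ytc_UgycNJHQCL07_tzmxBV4AaABAg")
print(coded["emotion"])  # fear
```

Matching the displayed Coding Result against the raw response this way makes it easy to spot records where the LLM returned an id that was never requested, or omitted a dimension entirely.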