Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Quite honestly, AI ending humanity is a minuscule risk when compared to the risks of AI supplanting the need for humans to do any kind of labor for money. There is no question that AI will become more advanced than humans when it comes to intelligence level, but AI will not have to actively work to cause chaos in human society; its mere existence and ability to work smarter, faster, and cheaper than humans will itself create an environment that is very unstable. With regard to stopping or slowing AI development, this is a fool's errand. If the US or even all western countries stop AI development, this will not stop Russia or China; in fact it will only help them get ahead. Edward Teller said that his great achievement was not that he invented the hydrogen bomb (others would have, and others did); his achievement was that he advocated for it to be created. He understood that if it was possible to do it, it must be done to know of its possibility; otherwise adversaries could do it first and have an upper hand that nobody else knows is even possible for sure.
youtube AI Governance 2023-07-07T14:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw1HWjtEKJVk0WesGV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwtduOYBrkxXaO5cel4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxA9IKk__nMAtuv_Zl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyxIsIIvIrG447emjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzsvXdqlCdkhyQL-oZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwx9utrvoFxs8CeO014AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxtC8td0Onwam4okhB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw5kT3-QpYVoI8CcZ94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwp7u2Q1_X8MiBC1d94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwBBnM3HHSVqL9_6bB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
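The per-comment coding result shown above corresponds to one entry of this JSON array, matched by comment id. A minimal sketch of how such a raw response might be parsed and indexed (the `raw` string reproduces two entries from the response above; `parse_codes` and `DIMENSIONS` are illustrative names, not part of the original pipeline):

```python
import json

# Two entries reproduced from the raw LLM response above; in practice, `raw`
# would hold the model's full output string.
raw = '''[
  {"id":"ytc_Ugwp7u2Q1_X8MiBC1d94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwBBnM3HHSVqL9_6bB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Index the model's JSON array by comment id, keeping only known dimensions."""
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = parse_codes(raw)
# The entry for this comment matches the Coding Result table above.
print(codes["ytc_Ugwp7u2Q1_X8MiBC1d94AaABAg"])
# → {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'resignation'}
```

Defaulting missing dimensions to "unclear" mirrors the value the codebook itself uses when the model cannot decide, so downstream tallies never encounter a missing key.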