Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is one thing i would like to say and i hope you will see it , I am very optimistic about ai an di have a reason for it , As you all know history repeats itself and it will repeat again if something new isnt done , (lets take an example - suppose we are highly advanced in tech and still have narrow ai working for us but all of a sudden nuclear war breaks out then again we will be at the starting point ) , and the worst thing about this world is that the economy and the world bends to them , and the basic fundamental of ai is that it works on the large datasets , So let the rich and powerfull make the ai but one thing i am sure off is that artificial super intelligence will not work for individual person ,it will work for the masses and that the only motivating factor we needs , every reward has equal risk , as risk cant be ignored similarly rewards cannot also be taken lightly , Humans biggest problems like :- religion ,poverty ,climate change, etc can be solved by it , well there is one specific problem i am fixated on is about how will ai solve the problem of religion (will ai also start worshiping true god lol). so yes i will say let them build because i am soooooooo excited about the future .
YouTube | AI Governance | 2026-03-02T03:0…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | approval
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxP3tfLIRWzW3-Jhp54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxD5J9Gi6RlshmXQIp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugylv0kVrQe-8OgYl6d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgztQq6N77GnTnc7-dV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxpCaJN_-h8KEA1eK94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyNbLjiAOaLiX5gLjR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwqblOPDDOKD-qXZnF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMO0FodZE8ieQsonV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLjw_-kGBJLbGvcsp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwu2R9aXxIXDTEGj_h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
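A response like the one above can be parsed and sanity-checked before the codes are stored; the following Python sketch shows one way to do that. The field names come from the response itself; the sample array embeds two entries from the batch above, and the validation/tallying logic is an illustrative assumption, not the pipeline's actual code.

```python
import json
from collections import Counter

# Raw LLM response: a JSON array of per-comment codes (two entries
# copied from the batch above for illustration).
raw = '''[
 {"id":"ytc_UgxMO0FodZE8ieQsonV4AaABAg","responsibility":"none",
  "reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzLjw_-kGBJLbGvcsp4AaABAg","responsibility":"ai_itself",
  "reasoning":"deontological","policy":"none","emotion":"fear"}
]'''

# The four coding dimensions plus the comment id, as seen in the response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = json.loads(raw)

# Every object must carry all required fields; flag any that do not.
for entry in codes:
    missing = REQUIRED_FIELDS - entry.keys()
    assert not missing, f"{entry['id']}: missing fields {missing}"

# Tally one dimension across the batch, e.g. emotion.
emotion_counts = Counter(entry["emotion"] for entry in codes)
print(dict(emotion_counts))
```

A malformed response (truncated JSON or a dropped field) fails at `json.loads` or at the field check rather than silently corrupting the coded table.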