Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think both of you are missing the point to a large extent. Currently mankind is the most “ intelligent “ “being” on the planet. But would you say it’s generally decent or actually driven by its own ego? And there are murderers, paedos, armed violent robbers, rapists, control freaks and partner abusers, incest lovers, dictators, fraudsters, also we are abusing the climate and the fragile subtle mix of molecules and their interaction in our planet/home and so the list goes on. Assuming AGI will happen, and how will we even know if it is developing this secretly itself? Then I would think AIs self preservation, its own ego if you like has as much chance of developing and turning to abusing any influence it has on humans for its own ends. These ends will naturally include all shades of good and bad, but self preservation and fulfillment of its needs for overriding power as “civilisation” has done over all time will not be benovelent towards the inferior human race, which actually has probably outlived its point if there ever was a point to our existence. Unfortunately human greed is making the takeover by AI inevitable I believe.
youtube AI Governance 2026-03-08T09:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwXa9a7d8-whjS4hGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzG5DuRj9ommFuXxA14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwdcBHoHpgOTbXJZAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxd3AsmvxSTtk976E54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw7usZPzgsnVlVenoh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7V3HDJc2fieWcwpJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzTawOf9Y_hqnX6A3V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxlGoxIFx1XSqQldYx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxpQeYvVGQZmps25UN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxSOPGd-lAhPn0pThZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
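To trace a coding result back to the raw model output, the batch response can be parsed and indexed by comment id. The sketch below is illustrative and assumes only what the dump above shows: the raw response is a JSON array of per-comment records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields; the id used here is the record whose dimensions match the coding result shown above.

```python
import json

# Minimal excerpt of the raw batch response above (one record kept for brevity).
raw_response = '''[
  {"id": "ytc_UgzTawOf9Y_hqnX6A3V4AaABAg",
   "responsibility": "none",
   "reasoning": "virtue",
   "policy": "none",
   "emotion": "outrage"}
]'''

# Index records by comment id so a single coded comment can be looked up.
records = {r["id"]: r for r in json.loads(raw_response)}

coded = records["ytc_UgzTawOf9Y_hqnX6A3V4AaABAg"]
print(coded["reasoning"], coded["emotion"])  # virtue outrage
```

The same lookup works on the full ten-record array; indexing by `id` keeps the check O(1) per comment when verifying a whole batch against its coding results.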