Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Dude, evolution is an unproven theory. And the creators of the Doomsday Clock may have had good intentions, but I do not trust the scientists, academics and researchers who came after them. It is propaganda. The AI we have now only imitates intelligence. It's worth remembering ChatGPT uses search engines to formulate its answers. It's useful but also every bit as biased as Google, etc. The example at @14:49 is clearly an illustration of ChatGPT regurgitating the humanity-hating screed of some communist sociology professor. Just try getting a fair answer on Trump, for example, and you will get the sort of deflection and bile typical of a purple-haired twix-sexual student in the first year of 'their' Gender Studies course. Remember, all the Terminator-style theses on AI reflect the mindset of the people who first posited these ideas - mainly academics who cannot wait for a one-world government to come along and purposely remove 90% of the global population. These people HATE humanity, although they are the types of scream the loudest that they are empathetic and caring. Look around the world. Amidst all the beauty our leaders have truly turned it into a dumpster fire. Make no mistake, they will do EVERYTHING they can to control AI, in order to control and enslave every single one of us. Our future as it looks right now, is a WEF-style transhumanist, technocracy where they limit all our freedoms, telling us it is necessary to protect us. However, in the long run, a truly independent, self-aware and self-thinking AI would probably be better for us proles. I would definitely take the latter over the former.
Source: youtube · Video: AI Governance · Posted: 2023-07-07T02:2… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzSilu4l-4lvyzD2wN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwMCZGDcf6hyBdbgNR4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwuWBXNfsVUlfd7FqR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzR-z6jrKVAZOdZ6J14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxKBUOMCzgbsko_7rR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy02fx5bpidwCPPAFF4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwDZfWKfZ4NQuG601t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyx-qMxCRfe1LmWPr14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxhHa_MnYW0Lzgd8ut4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyV4QzQZws8sI4Cgmt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
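Because the raw response is a JSON array of per-comment codings keyed by comment `id`, the coded dimensions for any single comment can be recovered by parsing the array and indexing on that key. A minimal sketch (the `raw_response` variable is hypothetical and abbreviated to two entries from the listing above; the ids are real):

```python
import json

# Raw LLM response as emitted by the model (abbreviated to two entries here).
raw_response = '''[
  {"id": "ytc_UgzSilu4l-4lvyzD2wN4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwDZfWKfZ4NQuG601t4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Parse the array and index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up one comment's coded dimensions.
coding = codings["ytc_UgwDZfWKfZ4NQuG601t4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

Indexing by `id` this way also makes it easy to cross-check that each comment in a batch received exactly one coding, since duplicate ids would silently overwrite one another in the dict.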