Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wouldn't trust elected politicians more than unelected technology leaders. I don't understand A.I. in depth. I can see that there are huge potential dangers and it seems like we've been alerted that the employment market might be decimated by A.I. within a few years. Maybe the situation is currently far more advanced than merely being a threat to the global economy. What kind of technology will A.I. be able to develop. Who gets to own, control and benefit from it. What has already been developed. I wonder how advanced some countries like Russia, China and other countries are in their A.I. tech. I would be confident that they're in the lead and certainly in the game at the very least. A moratorium will serve as a Public Service Announcement and little more. I have perhaps ignorantly been excited for the advent of meaningful A.I. in the hope that it facilitates the ushering in of an age of abundance that Musk has claimed we're on the brink of. Advancement could be positive and negative in different ways. Personally my view of the world is largely pretty bleak. Governments do not serve the public. Capitalism does not shine on everyone. If A.I. is also a net negative, then ok, add it to the list. Is the Paperclip example really any worse than what already happens in reality. We're already propagandized, enslaved by the cost of living, poisoned by a myriad of different things. It seems to me that Governments and corporations have had immense power and I personally am not pleased with the results, and with that I feel it worth the risk of A.I. potentially being our overlords.
youtube AI Governance 2023-03-30T10:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzUvcNmWdhebmM0rq14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxjJ21UIW-N-gWtIFZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzL-60u0YDiyy3C0Kp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQuu1agpBnCarU-WV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyEr-3Ej1EhBGPV4dh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz_ExjirZyC-Mmo8sl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzEqeUGizeI9hgIB2Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyo0diA4kxO18Ucril4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy9UNNZe6ashXzPg114AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy0t7u8ncodnNvS6hR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
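The coding result for a single comment is recovered from this batch response by matching on the comment `id`. A minimal sketch of that lookup, assuming (as in the response above) that the model always returns a JSON array of objects with the keys `id`, `responsibility`, `reasoning`, `policy`, and `emotion` — the function name `parse_codings` is hypothetical, not part of the tool:

```python
import json

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment id.

    Assumes `raw` is a JSON array of objects, each carrying an "id" plus
    the four coding dimensions (responsibility, reasoning, policy, emotion).
    """
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example: look up the coding shown in the table above.
raw = (
    '[{"id":"ytc_UgzL-60u0YDiyy3C0Kp4AaABAg",'
    '"responsibility":"distributed","reasoning":"mixed",'
    '"policy":"regulate","emotion":"fear"}]'
)
codings = parse_codings(raw)
print(codings["ytc_UgzL-60u0YDiyy3C0Kp4AaABAg"]["policy"])   # regulate
print(codings["ytc_UgzL-60u0YDiyy3C0Kp4AaABAg"]["emotion"])  # fear
```

Indexing by `id` up front makes it cheap to cross-check each table row against the exact model output, even when one response covers many comments.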