Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To all people in the comment section, especially the people watching the video, I understand the concerns that AI brings about. Given its advancement in recent years like OpenAI's ChatGPT, I am sure it is all impressive stuff, but what I have long argued, and what I do fully perpetuate, is that having AI replacing every single job in existence, is very reckless, and tends to cause more harm than good. And God forbid we ever tie an AI to a missile silo. That's why there should be red lines that should not be crossed when it comes to using artificial intelligence in general. At best, only most areas could be replaced with AI, but with the likes of cooking, policy enforcement (looking directly at you Google, DeviantArt, and YouTube), and certain other fields, are better off not being used for artificial intelligence. At best, with the likes of editing and creating source code, they can be an exception, but by all means, they can make creating source code more efficient, and the editor just needs to edit other areas of the code so they can be satisfactory in the given outcome. As for AI Art Generation on the other hand, I also do firmly believe we need more restrictive measures on those systems, like making them far more expensive to use, while fairly giving the person the right to own the art themselves. In conclusion, those things I propose are much better compromises, rather than totally disallowing them.
youtube 2023-05-17T11:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyhYnjmsmT5breAgQB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8sy38DZhvKKbzK6J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxWGyHvSN6XMob1UHF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxlxL6Mzrz4KOrctKl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwAEpDQjx_YyXBlD3V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwQ7z0mqy9E7W-lqKB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyqdve7kmMbpB8QEQh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVzTxHeCrDHK50--J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyUGWiFWk0DofDYKt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy9cD8WX4LGbuDj4u94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
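A raw response like the one above can be parsed into a per-comment lookup and validated against the coding scheme. The sketch below is a hypothetical helper, not part of the tool shown here; the allowed label sets are assumptions inferred from the values visible in this response, and may not cover the full codebook.

```python
import json

# Assumed label sets, inferred from the response above (may be incomplete).
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference", "resignation"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded records) into
    {comment_id: record}, rejecting any label outside the scheme."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# Two records copied from the response above, used as sample input.
raw = '''[
  {"id":"ytc_Ugx8sy38DZhvKKbzK6J4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyhYnjmsmT5breAgQB4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

coded = parse_coding(raw)
print(coded["ytc_Ugx8sy38DZhvKKbzK6J4AaABAg"]["emotion"])  # fear
```

Validating every dimension before storing a record means a malformed or off-scheme model output fails loudly instead of silently contaminating the coded dataset.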