Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wisdom and foresight, two things that the world don't have. What is more likely is that AI is going to be used as excues for awful acts and mistakes. AI is also incredibly overhyped and would likely make way more mistakes than people would think. They would just focus on cases and senarios where they "succeeds". i.e. if you bomb more targets because "AI" says you can in a shorter time frame, you will end up with a higher body counts and defeat your enermies quicker. And they would use the same excues the guy in the video used, well, human does that too. Who is actually going to be able to fact check them ? As of now, "AI" are often glorified spreadsheets that gives you a list of proportions / probabilities based on what you train them on. I.e. IF you think hospitals are high piority targets, they will think hospitals are high piority targets, they now just gets the "green" light to do so. MUCH faster no less.
youtube 2025-06-03T18:3…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwqq1r0r8UC9hEkbbF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyy2T5xlX1lgZNkilR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyxL2usqDq29Ujr2GV4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyUQ4QIbQt9uRKIdT94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzaxRsw_S-k7BQm4H94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugz6LJWZ63lpynZFeOx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyqVdEyPFQOQ5mCTth4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwyuu2C6W-MBu3Uly14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwoR4Jf1UfAlE7tmAp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw7YEGaHplslnQHD7Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
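To inspect how a coding result like the table above is recovered from the raw response, a minimal sketch of the lookup step might look like the following. This is an illustrative helper, not the project's actual pipeline code; the field names match the JSON shown here, and the shortened `RAW` sample reuses one record from the response above.

```python
import json

# Hypothetical one-record sample taken from the raw LLM response above.
RAW = ('[{"id":"ytc_Ugz6LJWZ63lpynZFeOx4AaABAg",'
       '"responsibility":"government","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')

# The four coded dimensions, as they appear in the response schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

coding = extract_coding(RAW, "ytc_Ugz6LJWZ63lpynZFeOx4AaABAg")
print(coding["policy"])  # regulate
```

Matching on the `id` field rather than on array position keeps the lookup stable even if the model returns the records in a different order than the comments were submitted.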