Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Your Majesties, this year the Nobel committees in physics and chemistry have recognised the dramatic progress being made in a new form of Artificial Intelligence. This new form of AI excels at modelling human intuition rather than human reasoning. Unfortunately, the rapid progress in AI comes with many short-term risks. In the near future, AI may be used to create terrible new viruses, and to surrender to lethal weapons the decision of whom to kill or maim. We have no idea whether we can stay in control. We have evidence that if these systems are created by companies motivated by short-term profits, our safety will not be the priority. We urgently need research on how we can prevent these new beings from wanting to take control. They are no longer science fiction. Thank you.
youtube AI Responsibility 2025-11-10T12:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz7aBIYXHaDZL9FFPh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxy8RtKPnGnPmDqCDt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0L_rSniWpmseLHxl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4etuqqBBcpoCjHNd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxQ4H_0-DJbGPBW0oB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnrLStaxJHaVz17wp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHadEsKzHBfCYSQop4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbeCBo9xPOGGfiddB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy8jf84QMRQW6u0RF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzHXSRpjPiwrWKxnnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
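A raw batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical example: the function name `parse_batch` and the allowed label sets are assumptions inferred from the values visible in this response, not the project's actual codebook.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the values seen
# in the raw response above; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) and
    index the codings by comment id, rejecting any unknown label."""
    coded = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {item.get(dim)!r} for {dim}")
        # Keep only the coding dimensions, dropping any extra keys.
        coded[cid] = {dim: item[dim] for dim in ALLOWED}
    return coded
```

Rejecting unknown labels early keeps a model that drifts off-codebook from silently corrupting the coded dataset; a failed batch can then simply be re-queried.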