Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
At this juncture, it appears to me that the advancement of artificial intelligence is unstoppable. If one nation chooses to halt its progress, who can guarantee that another will follow suit?  My concern is that despite the dire circumstances we face, I observe no technological advancements or military preparations aimed at countering this threat. While it may seem trivial, it is worth considering measures such as stockpiling EMP grenades, EMP tanks, or perhaps implementing a killswitch to disable power across vast regions or even entire countries. This approach seems more pragmatic than merely hoping for a solar flare. Upon reflection, if scientists can collide atoms in an effort to recreate a black hole, it stands to reason that they could also explore the possibility of generating a solar flare. It is uncertain, of course.  Additionally, when I refer to military, I am speaking of the global military forces. The conflict is no longer confined to individual nations. It is a lamentable scenario that humanity might finally unite, setting aside trivial disputes, only to confront the threat of extinction together.
youtube AI Harm Incident 2025-09-12T11:2…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyjFcMcfyYtlHPomm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxkarO9kxvjjw-lkb94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzuTlHpRdvbmzdG52d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyqjEbBzC7_kZy7KzN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqcJ-chluV9laN4054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYF40zaIiJiEw_F9l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzI4B8EQHFlYp-IOEV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwrkANZwWA9epa49fh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxG59SOKo_Q9LAI8_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx5Xel-04EsQOyvvUV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"}
]
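A minimal sketch of how a raw response like the one above could be parsed and validated before use. The allowed values per dimension are inferred from the labels observed in this batch (responsibility, reasoning, policy, emotion); the real coding schema may permit values not seen here, and the `validate_codings` helper and `ALLOWED` table are illustrative names, not part of any pipeline shown in this document.

```python
import json

# Allowed values per coding dimension, inferred from the observed batch
# (assumption: the real schema may include additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this batch all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present with a recognized value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzI4B8EQHFlYp-IOEV4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
print(len(validate_codings(raw)))  # → 1
```

Records that fail validation are dropped silently here; a production version would more likely log or re-queue them for recoding.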