Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Jack's "I don't know to put my finger on the existential risk argument and yet it receives so much attention" argument misses the point. Indeed, his (or my) ability to contextualize AI risk has nothing to do with the actual threats whatsoever. "Doubters" love to frame the discussion in familiar terms like nuclear bombs and pandemics, but those are by no means the only avenues open to ASI that wants to exert it's power.
youtube · AI Governance · 2024-04-03T04:0… · ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
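
One way to model such a coding result in downstream code is as a small typed record. This is a minimal Python sketch, assuming only the four dimensions and timestamp shown in the table; the CodingResult name and its field names are illustrative, not part of the pipeline, and the example values in the comments are taken from the raw response below.

from dataclasses import dataclass

@dataclass
class CodingResult:
    # Dimension values as they appear in the coding table above.
    responsibility: str  # e.g. "none", "company", "government"
    reasoning: str       # e.g. "unclear", "deontological", "consequentialist"
    policy: str          # e.g. "none", "investigate", "regulate", "liability"
    emotion: str         # e.g. "outrage", "fear", "approval", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"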
Raw LLM Response
[{"id":"ytc_UgwZ37SIU25J4uff10d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgzSw3eI6eec7IUhaa54AaABAg","responsibility":"company","reasoning":"deontological","policy":"investigate","emotion":"fear"},{"id":"ytc_UgyQ_U6IpuB_hxTamQR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_Ugw5AbTiMw0JR2LPEH54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgxLArEOZkSvX3TCXHt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgyXGNxKnLhHq4L-XkF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugy71Vyv5EP-gB59Rr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgwFPM4EhWrKVOv7ZA94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgzNhK6NLEUC8bxVJsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgwF2gbFZNbTpw_XhrN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]