Raw LLM Responses

Inspect the exact model output behind any coded comment. The model returns one JSON array per batch of comments; the record whose id matches this comment supplies the values shown under Coding Result below.

Comment
Aside from the technical details and the discussion about the historical development of AI, it is a prudential question about how we ought to proceed given the unknown risks and capabilities of SI-AI. It is interesting that while some computer experts revel in computational power and what advances it could deliver to humanity, they are loathe to restrict its development potential to offset risks. I don't think a coordinated approach is possible in the AI race as governance and compliance would be impossible. But it is sobering that the deeper many computer science researchers advance into general, super-intelligent AI, the more safety concerned they become. There will not be a clear threshold once it is crossed, and it may be that a bad human actor directs the AI over the threshold, which will lead to the same consequences. I think Lex knows this and relies on an inherent optimism in the human capacity to recover from a crisis, should it occur, without wanting to lose the benefits that narrow AI offers. The problem is that if I was SI-AI I would be patient, progressively more deeply embedded in all relevant systems, disguise my intent and make sure I had made outcomes align to a high degree of certainty by running predictive models in the background testing all eventualities. And SI-AI can reformat itself and develop possibilities unknown to us. So it probably would be an all or nothing event across multiple domains, one could argue that it is inevitable.
Source: youtube · 2025-10-20T01:2…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugyrv371Hu6eOs7YGJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzfDXiU2R6dbVPqbLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy5w-EsmmTQea4yaZt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyJeDIiGCTk_xP3xRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxoNhkeL6MlMsBHA814AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxayCbSK2GpVCbV0T14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxHmA602z2DvJaZT8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwxqD-jpeSdARHbOrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxQMAuuzU8-ZfDBFbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgzscHGwG1h4ROH_2iB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]