Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I remember the book Superintelligence by Nick Bostrom and how it posited that everyone will be racing to develop A.I. out of fear that a rival will perfect it first. I also remember the idea that regulation would be a hard sell, since things like nukes or chemical weapons have a very selective use case while AGI/ASI touches, and potentially improves, a lot of aspects of everyday life.
youtube AI Governance 2025-08-26T18:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       contractualist
Policy          regulate
Emotion         resignation
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugzf6B5zuA-NQ3Os94R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxjhqgUW1WmSy_L6ed4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwy6bK-E-_sjS8NyKN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx2ZzyG4tlS5Lf6qj54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypVnkmiTYdJCwYBCl4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"resignation"}
]
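The raw response is a JSON array of per-comment coding records, keyed by comment id. A minimal sketch of how such an array might be parsed and a record looked up by id (the field names come from the response above; the `find_coding` helper is hypothetical, not part of any pipeline shown here):

```python
import json

# A trimmed example of the raw LLM response format shown above:
# a JSON array of coding records, one per comment id.
raw = '''[
  {"id": "ytc_UgypVnkmiTYdJCwYBCl4AaABAg",
   "responsibility": "government", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "resignation"}
]'''

def find_coding(raw_json, comment_id):
    """Return the coding record matching comment_id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

coding = find_coding(raw, "ytc_UgypVnkmiTYdJCwYBCl4AaABAg")
print(coding["emotion"])  # resignation
```

Matching the `id` field back to the stored comment is what lets each dimension (responsibility, reasoning, policy, emotion) be attached to the right comment in the coding result.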