Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great Interview! 10,000 years ago, we could not comprehend the things we, as humans, can do today. 3,000 years ago, we could not do many of the things we can do today. 100 years ago, we could not effectively do what we are planning to do in our lifetimes. However, humans have had concepts of non-human powers with higher intelligence (and capabilities) for hundreds of thousands of years, with early evidence of symbolic thought and ritualistic behavior dating to the Middle Paleolithic era. So even our most primitive minds projected questions about things we could not explain or change. And this has been consistent throughout our history, even as we have continued to advance our knowledge and power to control things within our world. This universal phenomenon addresses deep human needs by providing "answers" for life's complexities and uncertainties. I wonder not about the evolution of AI (as it will be the same as the evolution of previous technologies), but about how we will continue to rationalize its impact on the human psyche with respect to religion and omniscient powers. Do we really suspect that an Artificial General Intelligence (AGI) will reject and abandon the ageless human trait of believing in higher powers, which has been useful from an evolutionary, psychological, and social perspective? I would rather suspect that any "higher intelligence" would continue to project and support a human practice that serves to provide comfort, maintain social cohesion, and provide meaning and purpose, if for no other reason than to keep the humans they serve complacent and in order.
youtube AI Governance 2025-09-04T13:5…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | unclear                    |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id":"ytc_UgxzwKzMTq6cZ5c8dQZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzcuCq_hEC1SqEJRgx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyZ9lsVU8_1xeCX_bd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwq_A7dfqeoIecV_n94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwmd7oswwTrwafWDBx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQZAVYsCSRpOajo854AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxQwGiQCl0ihGxPVW54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy2GXOuOdqNMhU5wlh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgziStIsAsC55mNWbkZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzkpO0s8JumrVYYGhZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
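The raw response above is a JSON array with one coding object per comment id, which is how the per-comment result shown earlier can be recovered. A minimal sketch of that lookup step, assuming the field names shown in the response (the variable names and the truncated example string are illustrative, not the tool's actual code):

```python
import json

# Hypothetical excerpt of a raw LLM response: a JSON array of codings,
# one object per comment id, with the dimension fields shown above.
raw_response = """[
  {"id": "ytc_UgyZ9lsVU8_1xeCX_bd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

# Index the codings by comment id for direct lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Retrieve the coding for the comment shown in this section.
coding = codings["ytc_UgyZ9lsVU8_1xeCX_bd4AaABAg"]
print(coding["emotion"])  # -> indifference
```

In practice the parse should be wrapped in error handling, since an LLM may return malformed JSON or omit an id.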