Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is not a new worry but instead an old one that is getting to be more and more realistic. It was addressed a few years ago in the Podcast: The End of the World with Josh Clark which I highly recommend. You might want to have a listen AI is one of 7 (If I remember correctly) current existential risks facing humanity today. Thanks for a great video on a serious topic Just to see what the answer would be I just asked Chat-GPT to see what it would say and here it is:

You: How far away is the likely emergence of sentient AI ?

ChatGPT: As an AI language model, I cannot predict the future with certainty. However, experts in the field have varying opinions on when Sentient AI may emerge. Some experts estimate that we may achieve true Artificial General Intelligence (AGI) by 2040, while others believe it may take much longer. It is important to note that achieving AGI is a significant milestone, but it does not necessarily guarantee the emergence of sentient AI. Sentience is a much more complex and elusive concept that requires not only intelligence but also consciousness, emotions, and subjective experiences. Therefore, the emergence of sentient AI may still be a long way off, and we need to carefully consider the ethical and societal implications of creating such artificial beings.
youtube AI Governance 2023-04-01T06:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyqABI9b_-DTzDeFE94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwdc9Y4q7TUE2q_gGl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzU4FQvoKP8CqH7ZbB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyThcSc0G1OYnDhwCd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzx-gW6wKsmNgCiX7J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjlPPFhVY4nRMk80R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxgej8pmx4O42pkTfx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx63Ew_axkjTeIvN8B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhFTzwp1qXorZX1SJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx8aHsuguSmlW5MCH54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
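A raw response like the one above can be parsed into per-comment coding records. The sketch below is one minimal way to do that in Python; the allowed code values are inferred from this sample output alone (the full codebook may define additional categories), and malformed records are simply dropped.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# This is an assumption: the actual codebook may permit more values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"fear", "indifference", "resignation", "outrage"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in the sample all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugx8aHsuguSmlW5MCH54AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"outrage"}]')
coded = parse_coding(raw)
print(len(coded), coded[0]["emotion"])  # 1 outrage
```

Dropping invalid records rather than raising keeps a single malformed line from discarding an entire batch; a stricter pipeline might log the rejects for manual review instead.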