Raw LLM Responses

Inspect the exact model output that produced the coding for any comment.

Comment
Can we start a discussion of how to better the regulations of A.I. There are some very strong points here about the dangers of A.I. on our intellectual wellbeing and the moral compass of Artificial intelligence overall. Stephen: I challenge you to take a more executive role of collecting people who can in fact help develop an understanding of what a moral A.I. looks like and using it as a guidelines for companies to follow. Ive listened to a lot of your podcasts. Some of the people you interview are infatuated with the power of A.I. and all of its potential capabailities. Others like Geoffrey here (and even Elon in that interview clip) are good reminders as to why we need to be more mindful of its regulation. You say youll take our advice seriously when it comes to people you interview. What about when we challenge you to create a panel of foreword thinking/ A.I. industry intelligent people developing a collective understanding of failsafes and security measures for the unknown and very scary future of A.I. Whether youd like to admit it or not, you have a lot of power in the audiences you have gathered. Will you squander that influence? Or use that power to be more active in creating something hugely important? Also. Thank you very much for all of the important work that you are doing.
youtube · AI Governance · 2025-06-16T12:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
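
These four dimensions can be checked mechanically when scripting against coded records. Below is a minimal Python sketch of such a validator; the allowed value sets are inferred from the single raw batch shown underneath, so they may be incomplete and the actual codebook remains authoritative.

# Allowed values per coding dimension, inferred from the one raw batch
# shown below; the real codebook may define additional labels.
ALLOWED_VALUES = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return problems found in one coded record (empty list if clean)."""
    problems = []
    for dimension, allowed in ALLOWED_VALUES.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems

Running this over a whole batch flags any label the model invented outside the expected set.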
Raw LLM Response
[ {"id":"ytc_UgxflnY_ovUtQU31DuB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugyumz-Al-VxkR-jZrB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzZRYUZe0uZHaYlWbF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwQgRbSpGHI_xQv3454AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzaKqy-5AEwmV1FFN94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugz8BVrTD6z1hwUvcyl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw68ZfjRwDILWqDJ354AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzrSoMrETfOfdj2SBx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwWSMM8szgpT1AG5CN4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzlN9vq7H3hVEgkAsR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]