Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@HarvardBusinessSchool Obviously, a very smart guy ... but still no clear comprehension of timing. Almost all AI experts have a terrifying attitude of getting around to controlling and regulating AI _someday_ ... "when it's really needed". Consider this: After watching this video, I just had a very detailed conversation with ChatGPT about the likely timeframe needed to develop and implement some form of global AI treaty (or something similar). Based on analyzing the history/process of many past treaties, and filtering for treaties with similar anticipated features, the realistic average time is about 16 years, start to finish. Based on what's been done so far in the area of agreements, and what must still be done, ChatGPT predicts an AI treaty to emerge around 2035. So, 10 years from now. Question: Predictions from serious people about the emergence of AGI seem to be about 2-5 years. Do any of the legitimate predictions of the AI development time frame that you hear from AI experts make you think that we can wait 10 years to decide and agree on how to use and control AI? Why is there such a widespread and enormous disconnect between the comprehension of the speed of AI development and the speed at which we need to develop controls?
youtube AI Jobs 2025-07-05T14:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id": "ytc_UgzvP4e6HdNYS4ef2xx4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
 {"id": "ytc_UgzggXqHjN43Gnknrf94AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
 {"id": "ytc_UgyeT2j2IiPVvVp3MXt4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
 {"id": "ytc_UgxpaYlcG-W3XyCpMnp4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
 {"id": "ytc_Ugyy-IN95IbgsXe9s1F4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
 {"id": "ytc_UgwqHpeW2_J3ehX-lut4AaABAg", "responsibility": "none",       "reasoning": "deontological",    "policy": "unclear",   "emotion": "outrage"},
 {"id": "ytc_Ugxra5X5acXBiRfdRzh4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
 {"id": "ytc_Ugy3bA3tpqFYoUSMMbt4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
 {"id": "ytc_Ugx10X80s-q4w357GPF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
 {"id": "ytc_UgxjIRg_0CJS1cRhQQZ4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"}]
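The coding result shown above is recovered from the raw response by parsing the JSON array and looking up the record for one comment id. A minimal sketch of that lookup follows; the function name `coding_for` is hypothetical, and the two sample records are copied from the raw response above (the first of them matches the displayed Coding Result).

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
 {"id":"ytc_UgxpaYlcG-W3XyCpMnp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugyy-IN95IbgsXe9s1F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(response_text, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    records = json.loads(response_text)
    by_id = {record["id"]: record for record in records}
    record = by_id.get(comment_id)
    if record is None:
        return None
    return {dim: record[dim] for dim in DIMENSIONS}

print(coding_for(raw_response, "ytc_UgxpaYlcG-W3XyCpMnp4AaABAg"))
# {'responsibility': 'developer', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

A lookup by id (rather than positional indexing) tolerates the model reordering or dropping records, which is why returning `None` for a missing id is worth handling explicitly.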