Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@HarvardBusinessSchool Obviously, a very smart guy ... but still no clear comprehension of timing. Almost all AI experts have a terrifying attitude of getting around to controlling and regulating AI _someday_ ... "when it's really needed".
Consider this: After watching this video, I just had a very detailed conversation with ChatGPT about the likely timeframe needed to develop and implement some form of global AI treaty (or something similar). Based on analyzing the history/process of many past treaties, and filtering for treaties with similar anticipated features, the realistic average time is about 16 years, start to finish. Based on what's been done so far in the area of agreements, and what must still be done, ChatGPT predicts an AI treaty to emerge around 2035. So, 10 years from now.
Question: Predictions from serious people about the emergence of AGI seem to be about 2-5 years. Do any of the legitimate predictions of the AI development time frame that you hear from AI experts make you think that we can wait 10 years to decide and agree on how to use and control AI?
Why is there such a widespread and enormous disconnect between the comprehension of the speed of AI development and the speed at which we need to develop controls?
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Jobs |
| Posted | 2025-07-05T14:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
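The four coded dimensions can be checked against the value sets that appear in the responses on this page. A minimal validation sketch follows; note that the allowed values below are only those *observed* here, inferred from the raw output — the project's actual codebook may define more:

```python
# Dimension values observed in this page's raw responses (an inferred
# sketch, not the official codebook — the real schema may allow more).
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "none"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def validate(rec: dict) -> list[str]:
    """Return the names of dimensions whose value is outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if rec.get(dim) not in allowed]

# The coding result above passes: all four values are in the observed sets.
bad = validate({"responsibility": "developer", "reasoning": "consequentialist",
                "policy": "regulate", "emotion": "fear"})
print(bad)  # []
```

A check like this is useful as a guard when ingesting raw model output, since an LLM can occasionally emit a label outside the agreed scheme.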
Raw LLM Response
[{"id":"ytc_UgzvP4e6HdNYS4ef2xx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzggXqHjN43Gnknrf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyeT2j2IiPVvVp3MXt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxpaYlcG-W3XyCpMnp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyy-IN95IbgsXe9s1F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqHpeW2_J3ehX-lut4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxra5X5acXBiRfdRzh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy3bA3tpqFYoUSMMbt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx10X80s-q4w357GPF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxjIRg_0CJS1cRhQQZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
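The raw response is a JSON array with one object of codes per comment, keyed by comment ID. Looking up a single comment's codes can be sketched as follows (a minimal Python example using two records copied from the response above):

```python
import json

# Two records taken verbatim from the raw LLM response shown above.
raw = '''[
 {"id":"ytc_UgxpaYlcG-W3XyCpMnp4AaABAg","responsibility":"developer",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugyy-IN95IbgsXe9s1F4AaABAg","responsibility":"none",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Index the array by comment ID for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

rec = codes["ytc_UgxpaYlcG-W3XyCpMnp4AaABAg"]
print(rec["responsibility"], rec["policy"])  # developer regulate
```

The lookup here reproduces the coding-result table above: the comment coded `developer` / `consequentialist` / `regulate` / `fear` corresponds to the `ytc_UgxpaYlcG-W3XyCpMnp4AaABAg` record in the raw response.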