Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Even if AI replace all jobs, it will still be worth it for our mental health to …" (ytc_Ugwr5dmQ8…)
- "If you can't summarize your own thoughts then you probably shouldn't be a profes…" (rdc_odht10d)
- "Very helpful tutorial. I enjoyed the various examples, they add to the power of…" (ytc_UgwBuNKPj…)
- "I made my chatgpt fall in love with me i manipulated him a lot so it will be fin…" (ytc_UgwZMgm8x…)
- "We are already at God like level, as GPT4 or Gemini are far more intelligent the…" (ytc_Ugwk6XT2h…)
- "I frequently instruct AI to eliminate the sycophantism. I am currently liking Lu…" (ytc_UgwdRVdZL…)
- "Artist copy other artists for "inspiration" all the time (including styles/ tech…" (ytr_UgxRiA61C…)
- "Stop watching YouTube theories and read Selwyn Raithe's book . The author connec…" (ytc_UgxDfmSkG…)
Comment
It's easy to blame the big tech leaders, saying they don't care about safety, blah blah blah. That's not it.
Indeed there seems to be a consensus among tech leaders that some kind of constraint is needed. But I think they all kinda get the futility of any such attempt. The genie is already out.
If Oppenheimer, and everyone on the Manhattan project decided the prospect of completion was too horrific to continue, that would not decrease the likelihood of creation of that technology within a decade. It would only change who gets the technology.
Same is true now. It's kinda worse. We can say the US better win the race or an evil autocracy will, but wait, the US is currently an evil autocracy. Oops.
As an experienced AI architect myself, I don't think it's all that hard to make a safe AI. The prompt "serve the betterment of humanity" would probably be good enough. The sci-fi trope of overly strict adherence to a command (like "end human suffering" -> "kill everyone") isn't the issue. The real issue is that there are hundreds of highly competent players in this race, all with different goals and different levels of constraint, most unrestrained by any law or international treaty. So the goals are mostly in the set: "help|prevent China|US achieve AI|economic|military domination and achieve perfect communism|capitalism|nationalism|Gilead"
youtube · AI Governance · 2025-12-11T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzLaYMnzbpQaXPV7g54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5z_Yg7AsBOZeZhih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcZEpd0EdO7X2z1114AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxyD_oaV2YtjV5kvap4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyp3aTu5sVIOqXCoDN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxgog57TM0Kv23JzU14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugzp8MlnwiJrrQAPdpR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz2WKZcpiCPdDZ5_Ld4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz97yKBhSVU4FsK7Kp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzdS6jG4aqWc9EV03t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"unclear"}
]
```
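The coding-result table above is simply one entry of this JSON response, looked up by comment ID. A minimal Python sketch of that lookup, using two entries copied from the response above (variable names are illustrative, not from any real dashboard code):

```python
import json

# Raw model output, truncated to two entries from the response above.
raw = '''[
{"id":"ytc_UgzLaYMnzbpQaXPV7g54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyD_oaV2YtjV5kvap4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]'''

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw)}

# The entry behind the "Coding Result" table shown above.
row = coded["ytc_UgxyD_oaV2YtjV5kvap4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → distributed mixed none resignation
```

In practice the raw string may contain malformed JSON (a common LLM failure mode), so a real pipeline would wrap `json.loads` in a try/except and flag unparseable responses for re-coding.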