Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Other than natural disasters the ultra-wealthy, who are in charge of developing …" (ytc_UgwzPGvzL…)
- "Wouldn’t it be funny if two years from now nobody is talking about AI, it ends u…" (ytc_UgwBmNk2G…)
- "Thoroughly enjoyed listening to this. Karen is just a highly intelligent woman b…" (ytc_UgxKpaOdc…)
- "Ai art has its value, for example, changing still pictures over an audio book th…" (ytc_Ugwl9t_yo…)
- "As a person who works in machine learning and data science, this is the way it s…" (ytc_Ugy4b6M9E…)
- "AI cannot actually generate original art. It takes existing art and merges it wi…" (ytc_UgzVCU2oS…)
- "Wow, this is almost a year old! This was suggested to me today and I found it ve…" (ytc_Ugz2LJJOl…)
- "The hubris of these tech giants is that they think they are going to own and con…" (ytc_UgxFssRo0…)
Comment
I am not applying to appear on the show. I would like to submit a content concept that I believe fits the editorial depth and reach of The Diary of a CEO.
AI discussions currently reach the intellect, but rarely the subconscious. My proposal is a single episode designed to bridge science, ethics, and collective memory.
Format: One conversation, three perspectives
– Geoffrey Hinton (AI pioneer and warning voice)
– Yoshua Bengio (AI safety and ethics)
– Will Smith (cultural figure who embodied AI risk in I, Robot)
The purpose is not entertainment, but emotional recognition: almost everyone has seen I, Robot. Bringing that cultural memory into a serious, grounded discussion about real AI risk could reach audiences that technical debates never reach.
I have no commercial interest, no request for credit, and no desire
youtube · AI Governance · 2026-01-15T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxmiU6lqG8uBYsIZWh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKXrDbVOH5TYs0gYl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0T_GonwaZ8l5LYUB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugyk91XFkUkej2F1lB14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy9V9c_Tm_BWfXq-CZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwp_RCj5imMLEEgmlR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxEyO3YEeGfe13fR_F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyZJ2cGxCHqETHA5J94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwo16b_x29d3GIdNpp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwq-qqnHNPibVZEpjJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
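The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. That shape lends itself to a simple index for the "look up by comment ID" workflow. A minimal sketch in Python, assuming the allowed category values are the ones visible in the responses shown here (the real codebook may define more):

```python
import json

# Category sets observed in the raw responses above (an assumption; the
# actual codebook may include additional values).
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "distributed", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "approval", "indifference", "mixed"}

# Two records copied from the raw LLM response shown above.
raw = (
    '[{"id":"ytc_UgxmiU6lqG8uBYsIZWh4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"ytc_Ugyk91XFkUkej2F1lB14AaABAg","responsibility":"unclear",'
    '"reasoning":"unclear","policy":"unclear","emotion":"approval"}]'
)

# Index the batch by comment ID so a single coding can be retrieved directly.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up one coding and sanity-check it against the known category sets.
row = codes["ytc_Ugyk91XFkUkej2F1lB14AaABAg"]
assert row["responsibility"] in RESPONSIBILITY
assert row["emotion"] in EMOTION
```

Validating each record against the category sets at load time catches malformed LLM output (a misspelled or invented label) before it reaches the coding-result table.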