Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The best reason to say thank you to ai is that it wastes the companies resources…" (ytc_UgwUfxU-g…)
- "They're not even CLOSE to self-driving. It's just a slightly more fancy cruise c…" (ytc_UgwRNdPzt…)
- "How about we don't use AI audio or video in court as evidence when they should i…" (ytr_UgxwCezSI…)
- "Poppy, Poppy , Poppy... so disappointed in your complete dismissal of AI as reli…" (ytc_UgwTuEXQj…)
- "Uuummm...... The fact that this can go Soo wrong like I Robot did I'm terrified …" (ytc_UgzB4mLZu…)
- "I can tell within seconds if an article has been generated using an LLM. The wor…" (ytc_UgwM__ZAF…)
- "Defective a.i.: *wrecks the assembly line* / That a.i.: I PUT MY HARD WORK YOU LIT…" (ytc_Ugyz0NCec…)
- "It's crazy to provide all this information about how bad AI is and then call peo…" (ytc_UgxyHHBDd…)
Comment
8:40 I am an expert in protein folding. Yes, great advances have been made by AI in this area. However, the problem has not been solved. For proteins for which the structure of at least one related protein is not known, current methods fail. For these isolated classes of proteins, AI gives you low confidence predictions that are mostly wrong. Also, the accuracy of predictions, even for those predictions with very high confidence, is not as good as to make the prediction useful in all applications (e.g. drug discovery). The latter issue might be solved in the near future, but the former issue is not fixable using current methods.
My point is that, when AI experts such as this professor are exaggerating AI's capabilities, I suspect that they may also be exaggerating the dangers. We should be careful and consider their predictions of danger but realize that they may be overestimating the danger as much as they overestimate the capability. When three Waymo cars can get stuck in a dead-end street and cannot resolve the standoff, you know that AI is still too limited to be an existential threat to the human race.
youtube · AI Governance · 2025-12-16T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyluWTYMMi4XULxC7x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxDb_FiMApxM-dIDnF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyhQwnmV2-C6NRsOTh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxo6z21fHP4E78q0Bx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzPYOJnqea916fe0kJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz8xq0EGnSlLASxFnt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgweF3a_sPxSmxsyQrN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwiciwtGvaf1Ublf2Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzR0a3XLsg7mfdaNrB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzcMFWpf0CT13UT83N4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
```
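A response like the one above is only usable if every record carries a valid label on each coding dimension. The sketch below shows one way to validate such a batch before storing it; the allowed label sets are inferred from the values visible in this output (e.g. `responsibility: none/ai_itself/company/developer/user`), not from any official codebook, so treat them as assumptions.

```python
# Hedged sketch: validate a batch of coded comments returned by the LLM.
# The ALLOWED label sets are inferred from the displayed results, not from
# an official schema; extend them if the real codebook has more categories.
import json

ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "resignation"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of error messages; an empty list means the batch is clean."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i}: {dim}={value!r} not in {sorted(allowed)}")
    return errors

if __name__ == "__main__":
    sample = ('[{"id":"ytc_example","responsibility":"none",'
              '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
    print(validate_batch(sample))  # prints [] when the record is valid
```

Rejecting a whole batch on the first unknown label is deliberate: a single off-schema value usually signals that the model drifted from the prompt's codebook, so the batch should be re-requested rather than silently coerced.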