Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
8:40 I am an expert in protein folding. Yes, great advances have been made by AI in this area. However, the problem has not been solved. For proteins for which the structure of at least one related protein is not known, current methods fail. For these isolated classes of proteins, AI gives you low confidence predictions that are mostly wrong. Also, the accuracy of predictions, even for those predictions with very high confidence, is not as good as to make the prediction useful in all applications (e.g. drug discovery). The latter issue might be solved in the near future, but the former issue is not fixable using current methods. My point is that, when AI experts such as this professor are exaggerating AI's capabilities, I suspect that they may also be exaggerating the dangers. We should be careful and consider their predictions of danger but realize that they may be overestimating the danger as much as they overestimate the capability. When three Waymo cars can get stuck in a dead-end street and cannot resolve the standoff, you know that AI is still too limited to be an existential threat to the human race.
YouTube · AI Governance · 2025-12-16T12:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyluWTYMMi4XULxC7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxDb_FiMApxM-dIDnF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyhQwnmV2-C6NRsOTh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxo6z21fHP4E78q0Bx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPYOJnqea916fe0kJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz8xq0EGnSlLASxFnt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgweF3a_sPxSmxsyQrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwiciwtGvaf1Ublf2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzR0a3XLsg7mfdaNrB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzcMFWpf0CT13UT83N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
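The raw response is a JSON array of per-comment coding objects keyed by comment id. A minimal Python sketch of how such a payload can be parsed and indexed for lookup (the two ids below are copied from the response above; the batch is truncated to keep the example short):

```python
import json

# A shortened copy of the raw LLM response shown above (two of the ten rows).
raw = '''[
  {"id": "ytc_UgyluWTYMMi4XULxC7x4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxo6z21fHP4E78q0Bx4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgyluWTYMMi4XULxC7x4AaABAg"]["emotion"])  # indifference
print(codings["ytc_Ugxo6z21fHP4E78q0Bx4AaABAg"]["policy"])   # regulate
```

Indexing by id rather than scanning the list mirrors how the dashboard resolves a comment's row: each id appears once per batch, so a dict lookup is both safe and O(1).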