Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@heathsims7091 I get that, but the high level of inaccuracy from some of the major participants lead to incorrect assumptions, the obvious spread of misinformation based on the reporting from the hearings, and will invariably cause some poor resulting decisions related to this very important area. Also, while some people like Sam Altman and Senator Welch were very succinct and sage, others like Professor Marcus and Lindsey Graham were to a large extent grandstanding and misleading others based on a poor set of motivations and limited understanding. Marcus because he is a proponent of a dying AI approach based on Noam Chomsky's now dead-end behaviorism applied to AI (and up until very recently, both Chomsky and Marcus asserted that Deep Learning/Neural Net approaches simply would not work), and instead pushed and sold books and got paid handsomely for lectures based on a bad archaic architecture (and Chomsky for his part was a large part of the cause of the AI Winter because people mistakenly trusted him). Lindsey has been trying to position himself as a "savior of the people" based on the desire to rile-up his base against "big tech" and "liberal technologist", rather than caring enough to learn what AI is. These are examples of things I believe are best left out of important hearings like this.
youtube AI Governance 2023-05-17T20:4… ♥ 1
Coding Result
Dimension        Value
---------        -----
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgxEmg4oycyUkrk8CT54AaABAg.9pp7vCvYhAs9puS6y_pHfU","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwkXJW3MQ1XGsDiTL54AaABAg.9pobI2sOtRw9posqTl2xuW","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxhdPIqezpXW98DDmB4AaABAg.9poZjwP7isd9ppQVm0lFRQ","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugyi_46PDDDtV2DHkJh4AaABAg.9poVjNLvgb89pohdsbDPUq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzRzp7WMa7jTGCjRgJ4AaABAg.9poKXOC0Og-9ptm4lctU0n","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzRzp7WMa7jTGCjRgJ4AaABAg.9poKXOC0Og-9pvy9wnlV1B","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgzRzp7WMa7jTGCjRgJ4AaABAg.9poKXOC0Og-9pwmPHcsI46","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzSSsxcEUDMjy228P94AaABAg.9po9s-lFTF_9poeDUA0Tn2","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxkuYKIzoO2mAXJlGF4AaABAg.9pnjiiZZa7E9powNKhn3jQ","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxy0dSSizatC9Sl4Th4AaABAg.9pnhljqROwq9pp3yXmcl_0","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"}
]
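When inspecting raw responses like the one above, it helps to check that every record the model emitted stays inside the expected vocabulary for each coding dimension. The sketch below does this with Python's standard `json` module; the allowed values are inferred only from the codes visible in this batch, not from a confirmed codebook, so treat `DIMENSIONS` as an assumption.

```python
import json

# A small excerpt of the raw LLM response above (two records, kept verbatim).
raw = '''[
  {"id":"ytr_UgxEmg4oycyUkrk8CT54AaABAg.9pp7vCvYhAs9puS6y_pHfU","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxhdPIqezpXW98DDmB4AaABAg.9poZjwP7isd9ppQVm0lFRQ","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

# Allowed values inferred from this batch only -- the real codebook may differ.
DIMENSIONS = {
    "responsibility": {"company", "government", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate(records):
    """Return (id, dimension, value) triples that fall outside the known vocabulary."""
    problems = []
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return problems

records = json.loads(raw)
print(validate(records))  # an empty list means every code is in-vocabulary
```

An out-of-vocabulary code (a hallucinated label, a typo, a missing field) surfaces as a triple naming the record id, the offending dimension, and the bad value, which makes it easy to route that comment back for re-coding.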