Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We need about 1000 more Karen Haos. These companies don't want to invent something good for people. They want to invent super-corporations. Companies that are so leveraged and powerful that they can usurp governments. That is where this is going. And Karen Hao is exactly right that these AI myths and fears are just marketing to gain public trust without having important public debates about what we want. For instance, do we want a society where compute power is consolidated into a few companies and controlled by a handful of unelected people? They basically want us to accept the idea that we should not own cars ourselves, but instead pay subscriptions for taxi service. I am 100% confident that if people better understood the future that these AI companies envision for us (conveniently enriching and empowering themselves) we would all reject it. All that said, I would really like to better understand what Karen Hao means when she refers to the harm to millions (billions?) of people all over the world that AI companies are inflicting. One thing I am sensitive to and do not want: "corporations are greedy and imperialistic, therefore we need to put this in the hands of government!" That would just be out of the frying pan and into the fire. In my view, the solution is to put this into the hands of the people. And that is pretty easy to accomplish: by insisting that companies like NVIDIA simply do what they have done for most of their existence: make better GPUs at realistic prices for the individual consumer market, not for the datacenter investors only.
youtube 2026-04-12T10:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugyvqmy1JTCn-XTE8IJ4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgzxDjEK-OU4AkYSQXZ4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxlXfemTggiGOouoWF4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxbTz4YfVbph8ZV-eR4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxkO7Yk-p2xliVkBlZ4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgyEVkmgwgplnyunVLl4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugw0O_6ea5NmHrc42xx4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugzf9p7qV8_HKoJRAup4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwFc8jbqj87KrIJZLd4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgynHOiKzbe6fhrh5gt4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "resignation"}
]
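The raw LLM response is a JSON array of per-comment codes. A minimal sketch of how such output might be parsed and checked follows; the allowed value sets here are only inferred from the codes that appear above, not taken from the actual coding schema, and the `validate` helper is a hypothetical name.

```python
import json

# A one-record sample in the same shape as the raw response above.
RAW = '''[
  {"id": "ytc_UgxlXfemTggiGOouoWF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Allowed values per dimension, inferred from the records shown above
# (an assumption, not the pipeline's authoritative schema).
SCHEMA = {
    "responsibility": {"company", "government", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate(records):
    """Return the ids of records whose codes all fall inside SCHEMA."""
    valid = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec["id"])
    return valid

records = json.loads(RAW)
print(validate(records))  # ids of well-formed records
```

Checking each dimension against a closed value set catches the common failure mode of LLM coders drifting outside the codebook (e.g. inventing a new emotion label); malformed records can then be re-queued rather than silently stored.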