Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The big points, imo, on this general subject are...

...unemployment statistics aren't impressive. I'm unemployed, but because I do not qualify for unemployment I am not counted in those statistics. All you have to do is narrow the scope and you can make statistics say whatever you want, and make whatever point you want to make.

...businesses need customers, the consumer class is not in much danger here no matter what happens. The goal should be to reduce suffering, not to prevent disaster.

...not everybody will be suited for the jobs the future needs, you can't just re-educate somebody and expect them to keep up, you can't just tell somebody to "learn to code" and expect any results.

...automation is here already, it's a gradual process. Have you seen the new mcdonalds screens? Have you seen walmarts self checkout lanes? More and more, and faster than we think, people ARE losing jobs to automation, it's just misleading to say that they're "losing jobs" when it's more a case of reduced hiring. AI is also here already, it monitors temps, learns their jobs, and then the temps are let go so the AI can do it. This is happening, right now.

(and others I may be forgetting right now)

In my opinion, rising technology is to blame 100% for government failing to ruin absolutely everybodies lives. It's just too powerful a force for good. I honestly don't care who wins elections, because all parties are technophobic in one way or another, I only care that technology continues to advance despite them. A UBI seems like a fantastic idea to me, it would ease the most suffering and promote the most advancement, as well as accidentally solve a lot of other problems (homelessness, chronic unemployability, illegal immigration) but I'm uncertain if our economy and technology are yet powerful enough to support it. Someday, maybe not today, but someday.
youtube AI Moral Status 2020-02-02T08:3…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxvRmbO-776mbs_gBN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzlYdKiGzbgH1qziiN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw4y0hErdGKG11ait94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgynBrwiqp2SZH5Nlh54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyjaJSOuKzLKV58NYx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw56vZ9uqn3TBMnEOp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy1PzQusXdtijEHd5p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxzdH3TKRSU0iKLy9R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzatEFpalIQlbEj5pF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxZoIni1WZneRueg5t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
(Note: the response as emitted by the model closed the array with ")" rather than "]", which is not valid JSON; the terminator has been corrected above.)
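Raw responses like the one above are not guaranteed to be valid JSON, so any pipeline that extracts per-comment dimensions needs defensive parsing. The sketch below is a minimal illustration, not the tool's actual parser: the `)`-for-`]` repair targets the glitch seen in this response, and the fallback of every dimension to "unclear" on a missing record is an assumption modeled on the coding-result table above.

```python
import json


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of per-comment records.

    Repairs one glitch observed in practice (the array closed with ')'
    instead of ']') and returns an empty list if parsing still fails.
    """
    text = raw.strip()
    if text.startswith("[") and text.endswith(")"):
        # Repair the malformed terminator before handing off to json.
        text = text[:-1] + "]"
    try:
        records = json.loads(text)
    except json.JSONDecodeError:
        return []
    return records if isinstance(records, list) else []


def dimensions_for(records: list[dict], comment_id: str) -> dict:
    """Look up one comment's coded dimensions, defaulting to 'unclear'."""
    dims = ("responsibility", "reasoning", "policy", "emotion")
    by_id = {r.get("id"): r for r in records}
    rec = by_id.get(comment_id, {})
    return {d: rec.get(d, "unclear") for d in dims}
```

If the whole response fails to parse, `parse_coding_response` returns no records and every dimension comes back "unclear", which matches how the coding result above would look after a failed parse.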