Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The head of AI 50+ years ago said we'd see the same thing in 3-8 years. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/ What we really need is a time machine so we can also go back in time and yell at everyone who also didn't believe in humanity's timely destruction at the hands of our new AI overlords, too. Like... ChatGPT is cool. As someone who has intimately studied ML and tried to keep up most of their life, I think it's an exciting time. But it's not the first, nor the last, time we're going to hear "but an expert said we're all going to die soon because of how powerful this technology is." Most of these people are heavily invested in their tech, and have also openly admitted the oncoming AI winter. Even if there were literally a 0% chance of anyone losing their job due to automation (which is not the case), trying to draw attention to your business, and creating a scapegoat as to why your business can't keep flourishing... only makes financial sense. These powerful people have the capacity to stop their projects in their tracks if they want to; this is like TSA-level theatrics.
reddit AI Moral Status 1685598978.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jmiu1k6", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jmiyavu", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_jmfrnw7", "responsibility": "media",       "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_jmfyo7p", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"},
  {"id": "rdc_jmi5ky3", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"}
]
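The raw response is a JSON array with one coding object per comment, and the Coding Result above is simply the entry whose id belongs to this comment (here, rdc_jmfyo7p). A minimal Python sketch of that lookup, using a truncated copy of the response (the variable names are illustrative, not part of the export):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (truncated to two entries here).
raw_response = """
[
  {"id": "rdc_jmiu1k6", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jmfyo7p", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
"""

# Index the coded entries by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

# Pull the codes for one comment; these are the values shown in the Coding Result table.
record = codes["rdc_jmfyo7p"]
print(record["reasoning"], record["emotion"])  # consequentialist mixed
```

The same pattern scales to the full export: parse each raw response once, index by id, and join the per-comment codes back onto the comment records.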