Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID.
Random samples
- "The war is between freedom and convenience. Which one will you choose bc you can…" (ytc_UgwNOQtV9…)
- "It's so nice to know that even though there are people who will abuse the power …" (ytc_Ugyv6HpzP…)
- "Tesla Robot tortured a man who createed them he was electricuting them so they c…" (ytc_Ugx_UX6Zi…)
- "Facebook is already a wasteland full of cat videos, minion memes, and AI slop. I…" (ytc_UgzavKe7P…)
- "It's time for positive change in America's schools. I hope that the change com…" (ytc_UgxxV0RuD…)
- "And they laugh at her saying she wants to destroy humans...excuse me not her the…" (ytc_Ugx4R_2-a…)
- "Elon Musk is the biggest Fraud of the first 24 years of the 21st Century. Anothe…" (ytc_UgzJe0q6R…)
- "Hello! It seems like you're referencing the game "Detroit: Become Human." While …" (ytr_UgyupJHHn…)
Comment
Sam Altman's actions can be viewed as coming from two very different worldviews:
* One is that OpenAI is in the lead, and wants regulation so that nobody can catch up to them. This is the one I see people jumping to instinctively. It is something that is very often true, but I don't think it's always true, and I think it might be a mistake to jump to that conclusion in every case. It might even be part of the reason here, but I don't think it's the whole reason.
* The other is that Sam Altman is actually very concerned, like Geoff Hinton, Stephen Hawking, Bill Gates, Eliezer Yudkowsky and a whole bunch of other people in tech and working in AI research specifically, that as we develop AI systems to the point of AGI and beyond, we may end up in a situation where this technology gets completely out of control, and that would be catastrophic. If OpenAI voluntarily stops, that doesn't do anything about the other labs that are several months behind them, including Facebook, whose chief AI scientist Yann LeCun is rather special among AI researchers in being very cavalier about whether AI is dangerous.
I think the latter is the more relevant thing going through his head. I honestly think he's too much of a techno-optimist. AI can solve a lot of problems, but the way our society is set up, it's going to cause a lot of problems first, that we could avoid if we had better ways to share wealth.
But I think that the risk AI poses to humanity is *the* big problem. I know a lot of people don't agree. It's very, very easy to dismiss today's AGI systems, but if you read through the kind of self-imposed testing and auditing OpenAI did, you can see that they're taking it very seriously. They had the Alignment Research Center (ARC) test GPT-4 to see if it had the capability to get out of control and do serious damage before they released it. ARC is run by AI researcher Paul Christiano, who has clearly and publicly expressed that his estimation that AI destroys humanity
reddit · AI Harm Incident · 2023-05-17 (timestamp 1684286820) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jkf8niq","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jkeo6bs","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_jkf05of","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_jkg0c65","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_jkey6qd","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
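A batch response like the one above has to be parsed and validated before the codes reach the dashboard. The sketch below shows one minimal way to do that in Python, assuming a schema inferred only from the dimension values visible on this page (the real pipeline's schema, function names, and error handling are not shown here and may differ).

```python
import json

# Allowed codes per dimension, inferred from values seen in this dashboard;
# the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"company", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping any record with a missing ID or an out-of-schema value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        codes = {dim: rec.get(dim) for dim in SCHEMA}
        if cid and all(codes[dim] in SCHEMA[dim] for dim in SCHEMA):
            coded[cid] = codes
    return coded

# Hypothetical record in the same shape as the response above.
raw = ('[{"id":"rdc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
print(parse_coding_response(raw))
```

Validating against a fixed code set at parse time means a model that drifts off-schema (e.g. inventing a new emotion label) fails loudly here rather than silently polluting the coded table.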