Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- “As a beginner artist, i find it really difficult to find references on the inter…” (ytc_UgzIpL6yI…)
- “The biggest problem with AI art right now is the method of acquiring information…” (ytc_UgzAKVAXp…)
- “What a meat, irony at the highest level when he's taking about artificial intell…” (ytc_Ugzlrh20Y…)
- “THIS DUDE SAYS "I won" AGAINST CHATGPT HAHAHAAHAHAHAHHHHHHHH twin you taking thi…” (ytc_Ugz-tKvIJ…)
- “So Waymo is learning to drive from human drivers then? Maybe it should be called…” (ytc_Ugy9mI67K…)
- “I don't think AI will be able to do anything in the Food, Restaurants and chefs fields. Because…” (ytc_UgxMdvRUL…)
- “I recently watched a video of Victor Davis Hanson that looked & sounded just lik…” (ytc_UgzYlNRHY…)
- “I’m confused about what version the victims were using. ChatGPT 5 has so many sa…” (ytc_Ugybk770w…)
Comment
>If you're one of the billions of people who have posted pictures of themselves on social media over the past decade, it may be time to rethink that behavior. New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things. Not everyone may be at risk, but everyone should know about it.
>
>Photographs have always been subject to falsifications—first in darkrooms with scissors and paste and then via Adobe Photoshop through pixels. But it took a great deal of skill to pull off convincingly. Today, creating convincing photorealistic fakes has become almost trivial.
>
>Once an AI model learns how to render someone, their image becomes a software plaything. The AI can create images of them in infinite quantities. And the AI model can be shared, allowing other people to create images of that person as well.
>
>...
>
>By some counts, over 4 billion people use social media worldwide. If any of them have uploaded a handful of public photos online, they are susceptible to this kind of attack from a sufficiently motivated person. Whether it will actually happen or not is wildly variable from person to person, but everyone should know that this is possible from now on.
>
>We've only shown how a man could potentially be compromised by this image-synthesis technology, but the effect may be worse for women. Once a woman's face or body is trained into the image set, her identity can be trivially inserted into pornographic imagery. This is due to the large quantity of sexualized images found in commonly used AI training data sets (in other words, the AI knows how to generate those very well). Our cultural biases toward the sexualized depiction of women online have taught these AI image generators to frequently sexualize their output by default.
>
>To deal with some of these ethical issues, Stability AI recently
| Field | Value |
|---|---|
| Source | reddit |
| Category | AI Harm Incident |
| Posted | 1670619021 (Unix timestamp, 2022-12-09 UTC) |
| Score | ♥ 13 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_izmub9o", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_izks94k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_izld4i1", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_izmka4h", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_izn607s", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
```
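The raw response is a JSON array with one record per coded comment, each record carrying the same four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal lookup-by-comment-ID sketch follows; the variable names are illustrative only, not part of the tool itself:

```python
import json

# Raw LLM response, copied verbatim from the batch above:
# a JSON array of coded records, one per comment.
raw_response = """
[
  {"id": "rdc_izmub9o", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_izks94k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_izld4i1", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_izmka4h", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_izn607s", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
"""

# Index the records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

coding = records["rdc_izmka4h"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# -> user regulate outrage
```

The same index can back the “look up by comment ID” box: a missing ID simply raises `KeyError` (or returns `None` with `records.get(comment_id)`).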