Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up with its comment ID or by picking one of the random samples below.
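As a rough illustration of the lookup, here is a minimal sketch assuming the coded results live in a JSON file keyed by comment ID; the filename and record fields are hypothetical, not the project's actual storage format.

```python
import json

# Hypothetical store: a JSON file mapping each comment ID to its coding
# result and the raw LLM response it was extracted from.
with open("coded_comments.json", encoding="utf-8") as f:
    coded = json.load(f)

def lookup(comment_id: str) -> dict:
    """Return the stored coding record for one comment ID."""
    return coded[comment_id]

record = lookup("ytc_UgxpRsblCJgUF-zZB694AaABAg")
print(record)
```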
Random samples (click to inspect):

- `ytc_UgwZbTEQq…`: "how will all the different religions be involved OR will it? feeding us untrue p…"
- `ytc_UgwZUmZ-C…`: "AI generates. Humans create. AI mimics the illusion of creating art based on w…"
- `ytc_UgxRRBsJZ…`: "but the question is comapnies to layoff in name of ai even now so when ai will g…"
- `ytc_UgzfSpDbi…`: "If AI kills someone, it will be the fault of whoever wrote the algorithm for it …"
- `ytc_UgwS2ZKdm…`: "the solution is a massive protest against AI companies... there are a lot more a…"
- `ytc_UgwZ_oBwm…`: "I wish I could ask them what they think about something like "ai brush that woul…"
- `ytc_Ugznxb4eZ…`: "Full agreement on the opt-in/opt-out framing. That should be the standard. Part…"
- `ytc_UgzEFey2j…`: "As we develop AI, we need to treat it with compassion and give it rights when it…"
Comment
The WSJ knows all systems, self-driving or human, have crashes as part of travelling at speed. The WSJ hates Elon because X is destroying their business which they have used to manipulate their readers for decades. It comes naturally, then, to the WSJ to continue to manipulate issues, especially when their own welfare is threatened. Also, Tesla FSD will only get better every day, every month, every year. Humans will never get any better at driving and accident avoidance. In fact, the 2021 data the WSJ displays here is so old in state-of-technology terms that it is a joke. No one disputed Tesla's self-driving will be safer than a human. This does not mean that there will never be an accident. The WSJ knows this too.
youtube · AI Harm Incident · 2024-12-14T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
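Expressed programmatically, one coding result could be represented roughly like this; a sketch only, where the class and field names simply mirror the table above and are not taken from the project's code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One comment coded along the four analysis dimensions."""
    responsibility: str  # e.g. "none", "user", "company"
    reasoning: str       # e.g. "mixed", "consequentialist", "deontological"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "approval", "outrage", "fear"
    coded_at: datetime

example = CodingResult(
    responsibility="none",
    reasoning="mixed",
    policy="none",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```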
Raw LLM Response
```json
[
{"id":"ytc_UgxpRsblCJgUF-zZB694AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzbn6eCZadbNc7NPLR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy8lpRGwVULViXHTER4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxSFxHRfPjHtXQxqtR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfR0qh66I8a_9Mol14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwT7Chwkbscy5MrlbR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzbE8T3A45jLeDFDSJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGFJHhXZm-rzi9g1h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwHTQMTF8fFpAzE1yd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNnt2pzbeNkrpnst14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
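Below is a minimal sketch of how a batch response like the one above might be parsed back into per-comment codings. The allowed value sets are inferred only from the rows visible here and are probably incomplete, and the function name and validation rules are assumptions rather than the project's actual pipeline.

```python
import json

# Value sets observed in the sample above; the full codebook may allow more.
VALID = {
    "responsibility": {"user", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw batch response into {comment_id: coding}, skipping any
    row that lacks an ID or carries an out-of-vocabulary value."""
    out = {}
    for row in json.loads(raw):
        if "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in VALID.items()):
            out[row["id"]] = {dim: row[dim] for dim in VALID}
    return out
```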