Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- New theory: Yann LeCun isn't as dumb as his arguments, he's just taking Roko's B… (ytc_UgxB7Y9xA…)
- @krishnamohanyerrabilli4040 Let me rephrase. If anyone deploys a product that… (ytr_UgzCvw08Y…)
- I enjoy driving. I will NEVER buy the self driving option. What the f**** will … (ytc_UgzCmvszj…)
- Creating an AI would be the biggest threat to humanity. If it is able to think o… (ytc_UggBdfltZ…)
- A man ended his marriage over an AI girlfriend. Like they're not even real!!! Ar… (ytc_UgwRd4GGF…)
- I blame Wired and the book "Out of Control" for both whipping up the hype around… (ytc_UgxR0oPWp…)
- Ok, look, I only use ChatGPT for writing when I want a jump start on researching… (ytc_Ugyvb4Gg9…)
- The point that stands well is exactly this, you can't take a drawing that an AI … (ytr_UgwEIFAaY…)
Comment
You talk of amoral AI. But that's not the issue.
The issue is IMMORAL leadership.
And amoral, often ideologically-skewed, science.
Some of the tests they do, even, are horrific. Nauseating.
Some of the things we allow, and brush off, or that gets promoted and sanitised through clever word usage, and an appeal to authority and science....
AI reflects our data. Our data is shaped by the world we live in. It's a mirror.
Platform: youtube | Incident: AI Harm Incident | Posted: 2025-07-25T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy5Dh4Mq74mNMRtnGd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxABLKYzn6SRFNE5gt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugx52rHusWa3jGQ5Nlx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgynJJ2ZN7j5p9AaNnV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwQCdoWEM3_rvjlDCx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx_KTiz8NuwGrjhrwd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzFT8WrCj1kFCdfXuZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxqiC4XEih9RWeuosh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyWFisqBlIYV9s8ngB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw-B5U8K6QLqnnkAv54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
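A raw response like the one above has to be parsed and sanity-checked before the codes reach the results table. The sketch below is one way to do that, not the tool's actual implementation: the allowed values per dimension are inferred only from the sample output shown here (the real codebook may define more categories), and `parse_coded_batch` and the `ytc_x` id are hypothetical names for illustration.

```python
import json

# Allowed values per dimension, inferred from the sample LLM output above.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose id prefix and
    dimension values are valid; anything else is silently dropped."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        cid = row.get("id", "")
        # ytc_ = top-level comment, ytr_ = reply (as in the samples above)
        ok = cid.startswith(("ytc_", "ytr_")) and all(
            row.get(dim) in allowed for dim, allowed in ALLOWED.items()
        )
        if ok:
            valid.append(row)
    return valid

# Hypothetical one-row batch: passes validation and is returned intact.
sample = ('[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
          '"policy":"regulate","emotion":"outrage"}]')
print(parse_coded_batch(sample))
```

Dropping invalid rows (rather than raising) keeps one malformed code from failing a whole batch; a production version might instead log rejects for re-coding.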