Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I'm working as structural designer and i can guarantee you, i will survive, the …" (ytc_Ugw9RnLnJ…)
- "AI bros will bring up the "don't blame AI, blame Capitalism" excuse. Why can't …" (ytc_UgyWCFrHY…)
- "I don’t understand how this is a worrying problem if ai has absolutely no way of…" (ytc_Ugz1aFSSn…)
- "The ability to have a true AI is still far off. We don’t have the processing an…" (ytc_UgwHwtgES…)
- "The A.I is programmed to learn from experience. It will learn from whatever tren…" (ytc_UgzhwrLVM…)
- "I live in poverty right now as a 79 year old widow. I know what it feels like an…" (ytr_Ugzfln-Du…)
- "For another perspective on this from an actual Science Fiction universe (Halo), …" (ytc_Ugy1aJjcO…)
- "What a bunch of dorks these ai bros are... Nope I take it back, that's an insult…" (ytc_UgyVT0k5Y…)
Comment
Training AI on the full spectrum of the internet was never going to end well. So much of the internet - and especially social media - is an appalling place, full of lies, cruelty, disturbing behaviour, sick images, cheating, lying, criminal behaviour, and straight up misinformation.
When you’re training an AI, you’re teaching it all that too. You really think a super intelligent being wouldn’t use every advantage it has? If it works for their goal (whatever that is) they’ll do it, because we’ve trained them to have a very, very dark side…
It’s too late now. We’ve entered the endgame. Humans have a long history of being remarkably stupid and of using every single good invention ever for twisted aims in the end. We should never have done this. But human greed was always going to be our downfall. Such a shame.
Just my opinion.
youtube · AI Harm Incident · 2025-07-27T16:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwdpjr2oOIh8P4dRzp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwnPPtZBPAsfjT1X-14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyPOTMyKM15e2boW0h4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzvupI59qAm3eHcJZV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw4UmE6UkgmmuzEhWt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7mA_okt-11ls3KfJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzmn3HfIjBSFzQ6aQJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzfUWw8_BjVDO8ZyvF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuscjiGOPfTB0lLgt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwo1cVguXvopj2ovNV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"}
]
```
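A raw batch response like the one above has to be parsed and sanity-checked before its rows are stored as coding results. Below is a minimal sketch of that step, assuming the four dimensions take values from closed category sets; the category lists are inferred from the values visible on this page and are illustrative, not the project's actual codebook.

```python
import json

# Allowed categories per dimension, inferred from values seen in this
# page's output tables (illustrative; the real codebook may differ).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row needs a comment ID and a known value for each dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in cats for dim, cats in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row response for demonstration.
sample = ('[{"id":"ytc_x","responsibility":"developer",'
          '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_batch(sample))
```

Rows with an out-of-vocabulary label are dropped rather than repaired, so a malformed model response degrades to missing data instead of corrupting the coded dataset.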