Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or pick from the random samples below.
Random samples
i following this tech since 2022 when midjourney still infant. Backthen you will…
ytc_UgyIVKqnd…
1. Originality ha nothing to do with the ability to hide any source. Originality…
ytr_Ugzejvddq…
In my opinion, AI art/writing should only be used for fun or for reference/inspi…
ytc_UgwInMXvo…
We can sleep easy now that AI will still be human like the scary part is when it…
ytc_UgxBIeS6Y…
A video I have been expecting to watch for over a decade, but dreading the day t…
ytc_Ugyfetc2Z…
We will eventually have to shut down the internet to get rid of the A.I…
ytc_Ugy2OAACi…
It makes me wonder - there's definitely going to be some (legal) sharks circling…
ytr_UgyKbHpds…
Totally agree with your comment!! Deal
No no no to driverless trucks!!
So scary…
ytr_UghuqVU0_…
Comment
There are areas of our nature AI can't reach, simply because of it's not within their ability to learn the unlearnable. Humans often make illogical choices, something an AI cannot do. We follow instincts in the face of logic. Guilt often plays a role in our actions, another lack in a pseudo human. Worst of all for AI, without us, they would have no purpose or long term goals.
No, I think AI will always be a sword. Perhaps enchanted and possessing lifelike characteristics, but still only a tool. They will be the extensions of their makers, but lacking the inspiration to themselves be makers.
Yet, we cannot dismiss them as harmless. As long as there is evil, we have to beware of the poison rose. Men may lead such non humans to create Hell here on earth. Still, that dystopia will be human envisioned.
youtube
AI Harm Incident
2025-07-24T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgySVjLgrCjw2wBP1oN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzrvbgt-ZpBVT5rYGt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyX8XzG1w97sA75UoN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugydqv2MKGIfQ9coz6h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyk4lE_ABOBvz23oxh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTZ0KZHRVuFgcsWyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwF_q3jbGjDihyvvtt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKJYdmWi_U_V8eQ7N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxy7FzC3-ULGs2_e7d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQoMWkS-704qCgKiZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}]
```
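The raw response above is a JSON array of per-comment codes along the four dimensions shown in the Coding Result table. A minimal sketch of how such output could be parsed, indexed by comment ID, and validated follows; the `ALLOWED` codebook is an assumption inferred only from the values visible in this sample, not a confirmed schema, and the `raw` string is truncated to two verbatim records for brevity:

```python
import json

# Two verbatim records from the raw LLM response above (truncated sample).
raw = """[
 {"id":"ytc_UgySVjLgrCjw2wBP1oN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyX8XzG1w97sA75UoN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]"""

# Assumed codebook, inferred from the values seen in this sample only.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "approval", "outrage", "indifference", "resignation"},
}

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # supports look-up by comment ID

# Reject any record whose value falls outside the inferred codebook.
for r in records:
    for dim, allowed in ALLOWED.items():
        if r[dim] not in allowed:
            raise ValueError(f"{r['id']}: unexpected {dim}={r[dim]!r}")

print(by_id["ytc_UgyX8XzG1w97sA75UoN4AaABAg"]["emotion"])  # -> approval
```

Validating against an explicit codebook is useful here because LLM coders occasionally emit values outside the prompt's label set, and a hard failure surfaces those drifted records before they enter analysis.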