Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I'm going to say this from a religious aspect.. I'm a guy fearing man, but if yo…" (ytc_UgwaCO-ef…)
- "Tesla is comfortably the most solvent and profitable car company in the world. T…" (ytc_Ugy44Uu_6…)
- "You are saying "The Self driving emperor is naked", but so many others were prai…" (ytc_UgyAAFZhz…)
- "AI as teachers in schools bad idea. some parents could homeschool but how smart …" (ytc_UgwAh1fjr…)
- "The caption shouldn’t be he scares everyone — it should be the reason why ai s s…" (ytc_UgwQc_Ngt…)
- "@tadeassopek1663 People who hate PewDiePie are the ones who are jealous of his…" (ytr_UgxFoiUFM…)
- "This is not biased data sets, it's that the data available reflect the reality t…" (ytc_UgztGp6E7…)
- "Ai art is the equivalent of Chef Mikey. Literally nothing but the worst most slo…" (ytc_UgzPCtP-d…)
Comment
A KlikBait Title but an interesting presentation.
Humans ... ALL Humans (and most animals) have a set of "morals" that they follow as "rules for life". They might not be "good" they might even be "horrible", they might even be incomprehensible but ... they are there and act, to some extent, as a limiter.
Further, there are limits to the extent that one Human ... or even a group of Humans ... can affect other Humans.
Even the most twisted Human recognizes the need to survive as a species ... to, at some level, protect Human life.
The problem that I see with AI is that it truly has no "morals" ... no behavioral baseline ... no real version of enforceable "ethics".
As it has no "progeny" or, can assemble from parts an "additional" or "new" version of itself ... it has neither a past or a future ... there is only itself regardless of which version of "self" it is.
It certainly has/will have no need of a "Vision of the Future" that is in any way related to any Human's version of a "Vision of the Future".
It certainly has no need of meeting 99% of ANY "Goal" that Humans MUST meet ...
youtube · AI Harm Incident · 2025-09-10T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgynaA2QyD2_ge3C8Sd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxCxpPrSkifhaznV8x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyR_I-Whl8D9NwaJqN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxrbDjHGXk73sEQygV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzIFEjWEgLqQj0YdP54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxqHIZPnexj3cibMoJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgziWAttFkFUjIreOP94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxsh5McjKZuec42j9V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzPCnGnu3uNbCoRQuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzI7_CMWlfbU-zap_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```