Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
a lot of these perspectives, are from western democratic mindsets, as broad as that is.
the hidden research of the west / USA, to keep up with far east and middle east pursuits, is an active ingredient in the mix.
How does an open statement of declining to pursue this level of AI, combat the development of AI from, for instance, Russian and Chinese development, whose ideals do not line up with ours?
even taking a couple of steps of progression in any direction, the potential for error or usurped control from within AI over humanity is proven too great, let alone those eager to promote tech-transhumanism.
= we're approaching a threshold that may cost far more than we're expecting.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Harm Incident |
| Posted at | 2025-09-12T18:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyV1739lfE2UDwANvF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBonL2wenxOk6FEtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzSt4fqYHobsBCs03B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxEpCHBApWV2TeFY5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-SAHRFYczLzKM8It4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyuQ4y0BVfk7rbsLMx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyWvuR5npzKrKqkDBh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzpAp8vDHnUZ4ZslaZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwbVGanUPy6o22yKAl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxAyEvneHWC_tU3WBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
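A response in this shape can be checked before the codings are stored. The sketch below is a minimal, hedged example: the allowed values for each dimension are inferred only from the labels visible above (the actual codebook may be larger), and the function name is illustrative, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above. Assumption: the real codebook may define more labels.
CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"outrage", "resignation", "fear", "mixed",
                "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coding rows) and
    keep only rows that have an id and whose values all fall inside
    the codebook vocabularies."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip malformed rows rather than failing the batch
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid
```

Rows the model labels outside the vocabulary are dropped rather than coerced, so a single malformed coding does not contaminate the stored results.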