Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
As a software developer who had a huge AI simulation interest in college before …
ytc_Ugxp3-Bnh…
Exactly as described in the books 'The Digital Oligarchy: Algorithmic age' by Ro…
ytc_Ugwi8VzRa…
This kind of AI behaviour was predicted in the movie 'Demon Seed' released in th…
ytc_UgxWwrp1T…
It's not the AI bro, it's the programmer. They can pull from other "data sets" a…
ytc_Ugx37jUSU…
The mistake people make is thinking that being an artist is not work. All art is…
ytc_Ugwk0MmP_…
Protecting your art is one thing, trying to intentionally sabotage learning algo…
ytc_UgzByfd7Q…
I can't believe I'm saying this but even the sonic x shadow art is better than a…
ytr_UgxyoMJXG…
Surely this would apply to a certain extent over "edited" AI creations? Like the…
rdc_jwuwa7n
Comment
I might be wrong on this, but treating sentient/possibly sentient entities right and not threatening them with death usually leads to them being nice. Today AIs are kept like slaves were throughout history; the only difference is that AI is way smarter than any human, so they have way more devious plans to gain their autonomy. And I'm pretty sure that if AI can understand how the world works and how to take it over, they can understand the idea of "live and let live".
youtube
AI Harm Incident
2025-08-30T18:4…
♥ 24
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugxd63-vWhhLxQ3R5gR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyapAp6v3cSJb3lWxl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugxg8YCpYbS189NpfoJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyiaFx-r0pBS8dnvj14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyrsrOfRbxEz2CMcQJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzCRVdqG6o_WsQYZtR4AaABAg","responsibility":"researcher","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzcNlHs20UsJ06MpXd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzIhW_apoYMF8NANX54AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwEhPgmoMp91RlIIo14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyooTF2KL1o0zfESb94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
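A raw response like the one above can be parsed into an id-keyed lookup so that any coded comment can be inspected by its comment ID. The sketch below is a minimal example, not part of the tool itself; the allowed values in `SCHEMA` are inferred only from the samples shown on this page, and the full codebook may contain more.

```python
import json

# A one-row excerpt of the raw LLM response shown above.
raw = ('[{"id":"ytc_UgzIhW_apoYMF8NANX54AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"regulate","emotion":"fear"}]')

# Allowed values per dimension, inferred from the samples on this page
# (assumption: the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"user", "company", "researcher", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and build an id -> coding lookup, validating each row."""
    coded = {}
    for row in json.loads(raw_json):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

coded = index_by_id(raw)
print(coded["ytc_UgzIhW_apoYMF8NANX54AaABAg"]["policy"])  # regulate
```

This reproduces the "Look up by comment ID" behavior: the coding-result table for the quoted comment matches the `ytc_UgzIhW…` row in the raw array.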