Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If you get rid of the labor force there is a point where society will collapse.…" (ytc_UgwO77tqu…)
- "I hate the taking inspo from ai art because ai has multiple weird errors in it t…" (ytc_UgzneuOK-…)
- "This is amazing and it shows the potential of AI. We are already seeing AI evolv…" (ytc_UgwS5bxrs…)
- "Basically, if we were all nice civilised caring humans then our AI baby would in…" (ytc_Ugz0xYltd…)
- "I don't even call AI "it" to be honest, let alone refuse to use manners. How I s…" (ytc_UgyTZ2ojy…)
- "The world advances by the blood of the innocent. Wars, plagues, famine. Only in …" (rdc_fjzl3a2)
- "THE ACHILLES' HEEL OF ARTIFICIAL INTELLIGENCE: IT (AI) HAS NO POW…" [translated from French] (ytc_UgweqlgOz…)
- "There is nothing to know. Why would medical AI need personal information? They c…" (rdc_lm84xwx)
Comment
In the name of the science and technology we create unbelievable thing's, but inside the dark evil minds are making plans and then you have military searching for new super weapons or how to control humanity. A hyper Ai creation will destroy all of us, because, we, the people, from ancient times killing it's other. It is a double edge sword. Bad hyper Ai is created from evil people. A good hyper Ai from good people. But, what about a free automatic Hyper Ai? Will it be good or bad? For me, sorry but I don't trust them and I think a hyper Ai has been created already.
Source: youtube | AI Moral Status | 2024-11-13T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzOIdbqFCwFh-G9oDZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyNbIFDbIaUEvRPwkN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4RCZnx0lTTeJkftd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyBE6bPl7QNTwGFm9l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy8ZRUB9zN30sdaEuh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxxt5iVRNCNiJaeiwR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwszHpxxMXIALmvasF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyz9df2qC7BSrwBtnd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxOiOv9i2dZB7mIk_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxg14OP-fsW32i7XnF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
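The lookup-by-ID flow above can be sketched in Python. This is a minimal example, not the tool's actual implementation: it assumes each raw LLM response is a JSON array of records keyed by `id` carrying the four coded dimensions, and reuses two records from the response shown above as sample data.

```python
import json

# Illustrative raw LLM response: a JSON array in which each record
# carries the comment ID plus the four coded dimensions.
# (Two records copied from the response shown above.)
raw_response = """
[
  {"id": "ytc_Ugy8ZRUB9zN30sdaEuh4AaABAg", "responsibility": "user",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxOiOv9i2dZB7mIk_h4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw response and return the coding record for one comment ID,
    or None if the ID was not coded in this batch."""
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgxOiOv9i2dZB7mIk_h4AaABAg")
print(coding["policy"], coding["emotion"])  # → ban fear
```

The `next(..., None)` idiom returns the first matching record without scanning the rest of the batch, and degrades gracefully when an ID is absent.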