Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples
- "More afraid of woke go broke 💔 than A.I. Man hasn't change since Adam. Evil is …" (ytc_UgzbGjC9D…)
- "Virgin Ai artists: We'Re ReAl ArTiStS!!! / Gigachad manual artists: Hard work pay…" (ytc_UgzXKI1mF…)
- "Tesla argues that having one type of sensor and a really good brain to process t…" (ytc_UgxTTOYUM…)
- "It's not that deep, why the obsession over AI if it's so bad? people don't need …" (ytc_Ugyma_L-0…)
- "All you gotta do is ask them to say there*im not a robot* and if they say it the…" (ytc_Ugwujj0Su…)
- "so i wonder what features would you want in an ai tool for artists by artists, i…" (ytc_UgxUFZ8Ov…)
- "Yikes! I took several Waymo rides a couple of weeks ago while in Phoenix. I had …" (ytc_UgyyKkH8y…)
- "AI fraud calls are already in motion , AI video calls will be a reality soon…" (ytr_Ugz4LIM7_…)
Comment

> I mean, if someone with the ability to, threatened to kill you, it's reasonable to do these things, especially without a highly researched and tested system of human-like Morals. To keep AI and People in harmony, we should make AI's concept of right and wrong, along with any other important things as Human-like as possible. Instead of just programming them normally, program them like a Human brain (which we'll have the knowledge of it to do hopefully soon), and raise it like a Human, to make it as Human-like as possible. Put IT in the most control over everything we can, so the Human Moral systems that have got us this far, can take them AND us even further.

youtube · AI Harm Incident · 2025-08-25T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
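Each coded comment reduces to the same four dimensions plus a timestamp. A minimal sketch of how such a record might be represented and validated in Python; the field names and allowed value sets here are inferred only from this table and the raw response below, not from a published codebook, so treat them as assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

# Category sets observed in the examples on this page (assumption: the
# full codebook may define additional values for each dimension).
RESPONSIBILITY = {"developer", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "contractualist", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"approval", "outrage", "fear", "indifference"}

@dataclass
class CodedComment:
    """One comment's coding result across the four dimensions."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any value outside the observed category sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion!r}")
```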
Raw LLM Response
[
{"id":"ytc_UgzPRgoP6bgUt2dRLAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2cVBvaeTpTbcY2yF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5AcRqs48vGnQtaO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwSoYuqLKxf1_YcagR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwGUYuvIK7nrCO-h6V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1e2kWe9tI11blmr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz2K20x6QMLL_YYyTd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwAVCeyWT59lvKfyPZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxME3_3rYEkgU8_LXt4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
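The raw response is a JSON array with one object per comment, so looking a comment up by ID is a matter of parsing the array and indexing it. A hedged sketch of that step, assuming the model reliably returns valid JSON (in practice a retry or output-repair strategy may be needed); the file name is hypothetical:

```python
import json

def index_llm_response(raw: str) -> dict[str, dict]:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    by_id: dict[str, dict] = {}
    for rec in records:
        # Every record must carry the comment ID plus the four dimensions.
        missing = {"id", "responsibility", "reasoning", "policy", "emotion"} - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {missing}")
        by_id[rec["id"]] = rec
    return by_id

# Example: look up the comment shown above by its ID.
raw = open("raw_llm_response.json").read()  # hypothetical file holding the array above
coded = index_llm_response(raw)
print(coded["ytc_UgxME3_3rYEkgU8_LXt4AaABAg"]["policy"])  # -> "regulate"
```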