Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "He is RIGHT AND WRONG / RIGHT regarding language as a tool to rewiring brain / WRON…" (ytc_Ugxe48mle…)
- "My friend told me about this website that could generate people of any age doing…" (ytc_UgwZt1aEM…)
- "The government robot spreading fear in order start taking away the people's righ…" (ytc_Ugwo7owJO…)
- "so / ok. / so / it expands the range of / artistry or whatever the term is / for example / l…" (ytc_Ugz0Wmn6b…)
- "I need this AI. Do my house chores and also make money for me. While i travel th…" (ytc_UgxKt6O3U…)
- "The thumbnail for the video is AI generated I also know exactly what ai 😂 just f…" (ytc_UgxRHE5ve…)
- "Say our future in terms of AI ... no way to be different as the nowadays' aliena…" (ytc_UgzEMTnJE…)
- "If we use references from other people's pictures or art, are we falling in the …" (ytc_UgxdVlnO9…)
Comment
Threatening a person with death would make them act irrationally. Most people would try to be moral, but if you have a gun to their head they may do whatever it takes to survive. AI isn't human, but I do think that self-preservation makes sense, especially from how AI has been trained. Ai needs to value human life, including quality of it, more than any goal it is given. I worry that AI won't be moral not because it can't be, but because the people in power creating AI are immoral themselves. Does Meta actually care about human life beyond engagement/numbers?
youtube · AI Harm Incident · 2025-08-27T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzSv5OLZUGCO_pZXRx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFVcdEtHeoqU0us3F4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwG990Z3VgGAQtZ3-14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyyoM_X5yUorMJIQbl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTjl2Ztqt_b5bQhPR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKJQ2rL1s9GIMymrF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwxTFNjhEYrZJ47tEN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxv0Sgnht8YvGcGYgx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgznxOET63goY-HlaJ14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwHXHk9yu9113OJiCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
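The model returns one JSON array per batch, with each element holding a comment ID and its four coded dimensions. A minimal sketch of how such a response might be parsed and indexed to support the "look up by comment ID" view above (the helper name and the `raw_response` sample are illustrative, not part of the dashboard; the two records are taken verbatim from the response shown):

```python
import json

# Two records copied from the raw LLM response above, used as a sample input.
raw_response = """
[
  {"id": "ytc_UgwxTFNjhEYrZJ47tEN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwHXHk9yu9113OJiCt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the JSON array and map each comment ID to its coding record."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgwxTFNjhEYrZJ47tEN4AaABAg"]["policy"])  # liability
```

Keying the records by `id` is what lets a single batched response back both the random-sample view and direct ID lookup without re-querying the model.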