Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If this can make you feel better (it won't), the plot and/or script of _Angel En…" (ytc_UgwI_i_Cs…)
- "There's so much for humans to do to be useful to society once it's no longer exp…" (ytc_UgzN1zLyY…)
- "when a womans vacuum cleaning so good and she be posing with it like she a hero …" (ytc_UgyO-9W01…)
- "AI isn't capable of taking all jobs. Not even most jobs. That would take machine…" (ytc_Ugy0MeeMp…)
- "I am really confused. How exactly is this art controversial. Like ai user litter…" (ytc_UgznxCdt7…)
- "Is the algorithm able to see us ?! Cause it scans somehow what we are interested…" (ytc_Ugz57Qu8t…)
- "He may be right, but the problem is that US have a lot of enemies. If the US sta…" (ytc_Ugzpzatw4…)
- "Well, art is communmication, that's why people who have nothing to say love ai a…" (ytc_UgzRRel9F…)
Comment
I really appreciate how Sasha focuses on AI’s current ethical issues rather than distant sci-fi fears.
It reminded me of Interview With AI – Ripacs Pipacs, a book written as a conversation between a person and an AI. It doesn’t glorify technology — instead, it asks what responsibility and empathy might look like in our relationship with machines. Very much in line with what Sasha is saying here.
youtube · AI Responsibility · 2025-10-10T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx1Nxt-jrpst8l-nk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNIRWlA24JiQTWnk14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwuL6VKdCK4Z9_dyT14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJmBQCfQxGTKd052l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjX0O2jTO8TrWGaMR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw2cFliAGg-BUMAW_p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx0mBd7tqrJO5aChk54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_xegblCPbwFEJE814AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8BQK3zEOip4bNfO14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6zi1Z5R_Dw-v-6eV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
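A raw response like the one above is a JSON array of per-comment codings, which can be parsed and indexed by comment ID for lookup. The sketch below is a minimal illustration, not the pipeline's actual code: the allowed values per dimension are inferred from the sample output shown here (not from an authoritative codebook), and the `ytc_x` ID is a hypothetical placeholder.

```python
import json

# Dimension values inferred from the sample responses above;
# an assumption, not an official codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID,
    rejecting any value outside the assumed allowed sets."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

# Hypothetical one-element response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue",'
       '"policy":"regulate","emotion":"approval"}]')
codings = index_codings(raw)
print(codings["ytc_x"]["policy"])  # -> regulate
```

Indexing by ID mirrors the "Look up by comment ID" feature: once parsed, any coded comment is an O(1) dictionary lookup away.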