Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Why can't the world have more Bernie Sanderses? (Sanders's? Whatever.) The world…" (ytc_UgxWMxBFW…)
- "It’s not GIB-Lee it’s JIB-Lee. At least be able to say it correctly. And who car…" (ytc_UgyWdjL19…)
- "Good piece. Its important to grok that YC is all about scaled BS, and Sam is a m…" (ytc_UgwNtp61H…)
- "automation is the future. there are already warehouses where the loading and unl…" (ytc_UggVJrwON…)
- "Great points! Codoki has been a lifesaver for my team. It really helps maintain …" (ytc_Ugy_i5x54…)
- "I can only talk for my field, but SE won't get replaced. You don't want your cod…" (ytc_UgzCGGeVY…)
- "You can tell that its ai by the way too shiny skin and the lifeless look😭😭😭…" (ytc_UgzzHZWO3…)
- "A.I. just need to close enough, that's why they are good at A.I. stuff. But doct…" (ytc_UgxrJQfIw…)
Comment
Human fantasy to make robots more human while humans become more robotic and dehumanizes. Apparently, more than 60 years of science fiction have not warned us. Do you think these are the latest models? What they show is obsolete technology. Knowledge shared is quite warning. Some robots have incredible strength to carry things. Are they going to make these robots weak physically? But then a robot can save a human being, see the struggle?
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Moral Status |
| Posted | 2020-09-17T14:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzmq9GCKTbJGE5qf5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyF0PQvIdCf8OmQsld4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwxv8vRYLEqBfzibBV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxy_r9tKM71kMlCfjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzgLu63oqjaaYoTP6t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzo55Z9NljII5HswId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzrf-DwsqccnC3USN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyLMznhgI9pjSkSuqx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXp6lWcLmthO2ggLF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugweb0APd_1eR9z8MkB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
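A response like the one above can be parsed into per-comment codes with a few lines of Python. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed values per dimension are inferred from the codes visible on this page (the full codebook may define more), and the function name is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the codes shown on this page.
# The real codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate", "ban", "liability"},
    "emotion": {"mixed", "indifference", "fear", "outrage", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    skipping records with a missing ID or an out-of-codebook value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_Ugzmq9GCKTbJGE5qf5Z4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')
print(parse_llm_response(raw))
```

Validating against a fixed value set catches the common LLM failure modes here: malformed JSON raises immediately, and records with invented or missing labels are dropped rather than silently stored.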