Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Asimov's Three Laws of Robotics, from Isaac Asimov's science fiction works, are a set of ethical guidelines designed for robot behavior: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Source: youtube · Category: AI Governance · Posted: 2025-09-24T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzw91VV3WxS4NJK4xh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyqOyMtM-RITcOZhLR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxaYfSSknWwESCIedR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzkMyiv1qIvHWdU_lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyj6fEmw5X77Qa2Eat4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybfdkcrGvgox5i7qV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgwObIW7eH1IfTIz1X54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgziGps0f9rZimNnIoJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy5KF7Lbg-Woy_PqTN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzWZZVuinbHsNXyNT14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
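The raw response can be turned into the per-dimension view shown in the Coding Result table with a few lines of Python. A minimal sketch, assuming the batch is valid JSON with the field names visible above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the abbreviated array here stands in for the full response:

```python
import json

# Abbreviated copy of the raw LLM response: a JSON array, one object per coded comment.
raw = '''[
  {"id": "ytc_UgybfdkcrGvgox5i7qV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgziGps0f9rZimNnIoJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Index by comment ID so a single comment's coding can be looked up directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgybfdkcrGvgox5i7qV4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {row[dimension]}")
```

Validating that every object carries all four dimensions before indexing would catch truncated or malformed model output early.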