Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
if a robot is inteligent and has self awarness we need to give them rights.…
ytc_UgiW6UEDZ…
It is not AI directly that is the danger so much as AI embedded in robotics. Rob…
ytc_UgxlD08ZF…
The irony of me getting an ai add featuring both ai animation and voices is laug…
ytc_UgwyPP4YB…
I am more afraid of antisocial techbros who openly vibe about eugenics, the wond…
ytc_UgzaG_vHo…
I don't know why but the way world is progressing towards AI and robots and God …
ytc_UgxNsR8OZ…
She is a fallen spirit intertwined In artificial intelligence only chosen can se…
ytc_UgzWePtni…
WORK AS IDENTITY is old people's way of keeping young people tied into their dyi…
ytc_UgyR79jmh…
Good points. I think AI also brings the risk of causing humans to lose expertise…
rdc_o5otj2u
Comment
Do Asimov's Laws of Robotics not apply to AI? The laws are as follows: “(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
youtube
AI Governance
2023-04-18T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
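The coded record above can be sanity-checked in code. A minimal sketch: the four dimension names come from the table, but the sets of allowed values are an assumption inferred from the samples on this page, not an authoritative codebook.

```python
# Allowed values per coding dimension, inferred from the samples on
# this page (an assumption, not an authoritative codebook).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "unclear"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The coded result shown in the table above.
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "regulate", "emotion": "unclear"}
print(validate(record))  # [] -> every dimension is well-formed
```

An empty list means the record conforms; any returned names point at dimensions the model filled with an unexpected value.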
Raw LLM Response
[
{"id":"ytc_UgzR5P5nSnzV2n-3nxt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySn8igOoIdA1jA9qp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw_SyYaT91hqSZCuGd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzVRzJQGcP8I7gAiOh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_Ugy9TrKHHCdMPDoSfzF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJtlfRytyVitU_2CV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0hBfu2CNCaZJrLVx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxhF7R_nL3kU_Bf2GR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQJbr4nNw0Uvw57FJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxxgPciBgAq49D0GtR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","emotion":"fear","policy":"none"}
]
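The raw response is a JSON array, so the "look up by comment ID" view above reduces to parsing it and indexing by `id`. A minimal sketch, using one record copied from the response (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` are taken from the output shown above):

```python
import json

# One record copied verbatim from the raw LLM response above.
raw = ('[{"id":"ytc_UgzVRzJQGcP8I7gAiOh4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"unclear"}]')

# Index the parsed array by comment ID for O(1) lookup.
coded = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return coded[comment_id]

result = lookup("ytc_UgzVRzJQGcP8I7gAiOh4AaABAg")
print(result["responsibility"])  # developer
```

Because the last record in the response lists `emotion` before `policy`, lookups should go through keys rather than positional order, as here.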