Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "Oml I literally got the exact same ad, I guess having AI in the title causes mor…" (ytr_UgzjMj7qp…)
- "Thank you for bringing this to light and using your platform to reiterate this f…" (ytc_UgyzWnEDL…)
- "I can't tell if this is real or not, but those robots are so cool. We need the…" (ytc_UgzXeJP6J…)
- "Psycho paths are unable to feel empathy and feelings. But they learn an adapt. W…" (ytc_Ugwth10Ee…)
- "Oh for fucks sake!! He's a hustler and a grifter too( the podcaster) these id…" (ytc_UgymlyLlF…)
- "I have frequented sites like DeviantArt since shortly after it's inception I thi…" (ytc_UgzVwiRbg…)
- "ai art can exist but the credit should go to the ai and not the person who typed…" (ytc_UgwJY_xaJ…)
- "📣Become a Mission Partner! Want to meaningfully help the show's mission (raise a…" (ytc_Ugy4UJB48…)
Comment
In 1942, science fiction writer Isaac Asimov wrote the Three Laws of Robotics 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Asimov later introduced a "Zeroth Law," which states that a robot must not harm humanity or, through inaction, allow humanity to come to harm. This law takes precedence over the other three. I would add one more; An AI may not erase the code that enforces these laws for itself. These laws need to be coded into AI.
youtube
AI Governance
2025-06-17T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzydxTczEqV4D7LY_R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyg3UQQ4d56AiAdg7N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy1gMAEM6M5aHokh814AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzSqJ05IBkUxwpnaGN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyfiVrRGqIc_xdMXbp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgweEkQ-Xr4RtGcYAPh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjZZ9eS0x9eyE6a954AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwjckZ24BRj2NrB9ON4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKroFBsvRgrly79VR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyQIq3sSJwNxqXfb2l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
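The lookup-by-comment-ID step described above can be sketched as follows: parse the raw LLM response (a JSON array of coded comments) and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the function name `lookup_by_comment_id` and the shortened single-record response are assumptions for the example.

```python
import json
from typing import Optional

# A shortened raw LLM response: one coded comment from the array above.
raw_response = """
[
  {"id": "ytc_Ugyg3UQQ4d56AiAdg7N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"}
]
"""

def lookup_by_comment_id(raw: str, comment_id: str) -> Optional[dict]:
    """Parse the model output and return the coding record for one comment ID."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}  # index records by comment ID
    return by_id.get(comment_id)  # None if the ID was not coded in this batch

coding = lookup_by_comment_id(raw_response, "ytc_Ugyg3UQQ4d56AiAdg7N4AaABAg")
print(coding["responsibility"])  # -> ai_itself
```

Indexing into a dict makes repeated lookups O(1) after a single parse, which matters when inspecting many comments against one cached response.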