Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"Hey Chat, give me 5 talking points to defend generative AI as art" - techbro NP…
ytc_UgzYtFiLo…
Theres taking a risk than theres being stupid. If the ai is made to think like u…
ytr_UgzRBqrvN…
10X cheaper to get an Optimus robot from Tesla and glue some implants on it.…
ytc_Ugw8bAW6j…
The answer is: Just because something can talk doesn't mean it's sentient, and j…
ytc_Ugz_ofAis…
sorry, the godfather of AI is James Cameron and he warned us of this ages ago! y…
ytc_Ugw3j6gnW…
We are in a horrid Catch 22 with respect to charging full steam ahead on AI deve…
ytc_UgwDbskoB…
AI art should be banned... These fuckin thieves steal our credibility and creati…
ytc_Ugzla0Fmf…
I legit at one point thought that the it welcome to derry storyline and characte…
ytc_UgxqxRjTY…
Comment
Isaac Asimov's Three Laws of Robotics are a set of rules designed to govern the behavior of robots in his science fiction stories: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These laws, first introduced in his 1942 short story "Runaround", have become a foundational concept in the field of robotics and science fiction.
Here's a breakdown of each law:
1. First Law:
This is the most important law, prioritizing the safety of humans. A robot must not intentionally harm a human or allow a human to be harmed due to the robot's inaction.
2. Second Law:
This law establishes that robots should obey human commands, with the caveat that these commands must not contradict the First Law.
3. Third Law:
This law dictates that a robot should protect its own existence, but only as long as doing so doesn't violate the First or Second Law. In other words, a robot's self-preservation is secondary to the safety of humans and obeying their commands.
youtube
AI Governance
2025-06-16T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyPaudX0VKOtSBdarN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyLHY0nCGHMlpU3zM54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAru9ozhl7WWrpvrl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz0-ZhZl96LGV-GqFV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyNNAFsz0saNWt9u-R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvgyDGo5HUMS9eP0V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_FfVzpnmcm5oXkOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwyshk3KrBZkbfNETF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-YuRpi8hxCUs5dKB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzanz6SU_KEfB52SC14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
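The raw response above is a JSON array with one coded row per comment, each carrying an `id` plus the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). Before such a batch is ingested, it is worth validating that the LLM emitted parseable JSON and only used known dimension values. A minimal sketch, assuming the allowed values are exactly those seen in the samples above (the real codebook may include more categories):

```python
import json

# Allowed values per coding dimension, inferred from the sample
# rows above -- an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and list any problems found."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for i, row in enumerate(rows):
        comment_id = row.get("id")
        if not comment_id:
            problems.append(f"row {i}: missing id")
            continue
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append(f"{comment_id}: bad {dim}={value!r}")
    return problems
```

For example, `validate_batch('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"}]')` returns an empty list, while a row with a misspelled dimension value is flagged by ID so it can be re-coded rather than silently stored.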