Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I’m still buying more PLTR. The AI genie is already out of the bottle and it’s n…
ytc_UgwljBWmx…
@Javilin447But that artstyle that AI generates isn't it's own. It's just a recr…
ytr_UgyuwF-qH…
@Sina-rw3bl The cop did not use AI. The casino used AI, the officer just trusted…
ytr_Ugw7u5L2k…
If you automate jobs, you remove those that pay for things that fund the other t…
ytc_UgzeD5HSm…
I have a meta quest 3 and an app called AI roommate , you have a 2 story house a…
ytc_UgwxbIpVl…
😵💫 The weidos who are excited about AI singularity are equivalent to the naive …
ytc_UgxnZeA_N…
The technology behind AI actually started in the medical field with medical imag…
ytc_Ugztb2sux…
Nobody just goes into one of these cons and sets up a table, you have to fill ou…
ytc_UgyFnZsQi…
Comment
The Three Laws of Robotics, introduced by Isaac Asimov in the 1942 short story "Runaround" (later collected in I, Robot, and refined throughout his robot stories), are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   This is the highest‑priority rule. It obliges a robot to protect humans from injury, even if doing so conflicts with its other directives.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
   Robots are designed to follow commands, but they must refuse any instruction that would cause them to harm a person or permit harm through inaction.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
   Self‑preservation is allowed, but only insofar as it does not interfere with protecting humans or obeying lawful orders.

These laws are intended to create a hierarchy of ethical behavior for autonomous machines, ensuring that human safety and authority always take precedence over a robot's own interests. In Asimov's fiction, many of the most interesting plot twists arise from the subtle ways these rules can interact or be interpreted under complex circumstances.
youtube
AI Governance
2025-12-04T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwkSrhvDteXfkKzLcF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx8iyxvz2uYqlSFHY94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgylKRXOeyiNMYDq8Xd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgyXuLjqnYFgNjR0qP14AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugwiwfo864VRJzW8gbJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyyvN5QAC-5d65cIdF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_Ugxq0wUXaIKVSCXlns14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugz9y6SSAIEwC1hAbop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz8BU1zi6vvCR_ilFl4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy7rytCH-AQuMD8lYV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"}]
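A raw response like the one above can be turned into the per-comment coding table by parsing the JSON array and looking up each comment ID. The sketch below assumes the four dimensions and the value sets seen in this dump (the real codebook may define more), and it assumes the tool's behavior of defaulting every dimension to "unclear" when a comment's ID is missing from the model's response; the function names are hypothetical.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred only from the
# values visible in this dump; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"none", "distributed", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological", "mixed"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "resignation", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, validating values."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim, "unclear") for dim in SCHEMA}
        for dim, value in codes.items():
            if value not in SCHEMA[dim]:
                codes[dim] = "unclear"  # fall back on out-of-schema values
        coded[rec["id"]] = codes
    return coded

def lookup(coded: dict, comment_id: str) -> dict:
    """Return the codes for a comment; if the model's response did not
    include that ID, every dimension defaults to 'unclear'."""
    return coded.get(comment_id, {dim: "unclear" for dim in SCHEMA})

# Minimal usage with one record from the response above.
raw = ('[{"id":"ytc_UgwkSrhvDteXfkKzLcF4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
coded = parse_raw_response(raw)
print(lookup(coded, "ytc_UgwkSrhvDteXfkKzLcF4AaABAg")["emotion"])  # indifference
print(lookup(coded, "ytc_missing")["responsibility"])              # unclear
```

Under this assumed default, an all-"unclear" Coding Result row is what you would see when a comment's ID never appears in the model's output.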