Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "this is really facebook but no friends. you give the company your data. instea…" (ytr_UgwheK2GK…)
- "The entire Silicon Valley is on stimulating drugs. Therefore, they take too much…" (ytc_Ugw5jMY0T…)
- "As long as artist keep their prices expensive, AI art will always bee wanted and…" (ytc_UgxPKdLUo…)
- "the second robot or real people was doing the side eye and i think it is real p…" (ytc_UgxPPN3Jx…)
- "@muff1nat0r Yes but AI devs need apes to put random keystrokes in their machines…" (ytr_UgwOQfEvJ…)
- "Using chatgpt is like using a robot to lift weights for you and every 5 minutes …" (ytc_UgyBryK_o…)
- "This is the third child who I have heard took their lives because of A.I or chat…" (ytc_Ugxwz5aiW…)
- "@HalkerVeil „Also, any image in copyright law can be used to create iterations” …" (ytr_UgxHlkMxv…)
Comment
Isaac Asimov's Three Laws of Robotics, introduced in his 1942 short story "Runaround," are a fictional framework designed to govern the behavior of robots and androids in science fiction narratives.
The laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws were originally conceived as a narrative device to explore ethical dilemmas and contradictions in robotic behavior, rather than as a practical blueprint for real-world artificial intelligence or robotics.
Asimov later introduced a "Zeroth Law" in his novel Foundation and Earth, which states: "A robot may not injure humanity, or, by inaction, allow humanity to come to harm," placing the welfare of humanity above individual humans.
Despite their widespread cultural influence, the laws are considered fictional and impractical for real-world implementation due to inherent ambiguities in language, such as defining "human being" or "harm," and the challenges of translating natural language into executable code.
The truth is, humanity is so screwed.
Platform: youtube
Timestamp: 2026-01-06T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugwsl6VlHxoyrI1J02N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJvAXllrV_r_wkGMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOnG1TEGW9ECtqMT54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgypIriCAjtb5DmnPD94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugym-3TBldTk2Q47BHt4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzrZLlajho4OLSNM1B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz02PD01E00C101p9J4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzIi_DRnwv2rvyAD6l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyGNNuk1nIz0l7mmKR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFCLLMuwYMFDmUU1d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
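The raw response above is a JSON array in which every entry carries its comment ID, so an ID-based lookup table falls out directly. A minimal sketch in Python, assuming the model output parses as valid JSON; the two entries below are copied from the array above, and the variable names are illustrative, not part of the tool:

```python
import json

# Raw model output as shown above (truncated to two entries for brevity).
# Field names match the coding table: responsibility, reasoning, policy, emotion.
raw_response = """[
 {"id":"ytc_Ugwsl6VlHxoyrI1J02N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzIi_DRnwv2rvyAD6l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]"""

codings = json.loads(raw_response)

# Index by comment ID so a single coding can be retrieved directly.
by_id = {entry["id"]: entry for entry in codings}

coding = by_id["ytc_UgzIi_DRnwv2rvyAD6l4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate approval
```

If the model occasionally returns malformed JSON, wrapping `json.loads` in a `try/except json.JSONDecodeError` and logging the offending response is a straightforward safeguard.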