Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
These arguments are hilarious since these AI companies claim to have "fair use" …
ytc_Ugwca0eY4…
"OK, enough about a potential societal collapse produced by mass unemployment wi…
ytc_Ugzk6tmMW…
THANKYOUTHANKYOUTHANKYOU. My mothers a big advocate for AI, like literally she …
ytc_UgwBvg4iP…
I’ve been saying for years that these companies should bear the responsibility f…
ytc_UgxTsi5fn…
More dangerous than AI is brain technology that can read people's minds and cont…
ytc_UgxN9iejj…
For that you don't need an AI machine. A simplest motion cracking code and a ser…
ytc_UgxsRrYrW…
Person: I herd you got cheated on I can't amagen how sad you were Robot: ummm I …
ytc_UgwlLBFrs…
Call a company and get Ai they are terrible. I find it difficult that the transi…
ytc_Ugwn80CGD…
Comment
The science fiction writer Isaac Asimov developed the Three Laws of Robotics back in the 1940s. (Paraphrased) 1. Don't harm humans. 2. Don't allow humans to be harmed through inaction. 3. Avoid harm to themselves, as long as rules 1 and 2 are not violated.
I think Asimov's ideas apply to AI, and he deserves a mention in any discussion about the risks of AI. The problem is: how would you implement such control laws within AI? Asimov didn't give an answer to that problem. His robotic "Positronic Brain" was just a fantasy at the time. However, his writings do explore the possible dilemmas of self-aware machines.
youtube
Cross-Cultural
2025-10-23T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzLlBWuQK8gFd5H4xV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvD89F5088B5o2iMt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7xxCbD7qLKGBjTDx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwh1E69drvZg8aXwcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7UiW_CFw-IU2Dywp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw9j4nNndQSjoYGEeF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLs3s_r8_1qO0m_al4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwGHNkygxdW3aoncvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxlNJ7VTQbmgVP43l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7AfG-M4dHemy_Ze94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
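The lookup-by-comment-ID view above can be reproduced outside the tool. Here is a minimal sketch (names like `raw_response` and `codings` are illustrative, not part of the tool) that parses a raw LLM response of this shape and indexes the per-comment codings by ID:

```python
import json

# A raw LLM coding response: a JSON array of per-comment dicts,
# in the same shape as the response shown above (two entries kept here).
raw_response = """
[
  {"id": "ytc_UgzLlBWuQK8gFd5H4xV4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw9j4nNndQSjoYGEeF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding by its full ID.
coding = codings["ytc_UgzLlBWuQK8gFd5H4xV4AaABAg"]
print(coding["reasoning"])  # deontological
```

Note that the sample list shows truncated IDs (e.g. `ytc_Ugwca0eY4…`); the lookup requires the full ID as it appears in the raw response.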