Raw LLM Responses
Inspect the exact model output behind any coded comment: look a comment up by its ID, or pick one of the random samples below.

Random samples
There is nothing we can do. Theoretically there is but knowing humanity enough i… (ytc_UgzGNlgaD…)
At first I was worried and discouraged from pursuing art as a career with this A… (ytc_Ugwfx_lnX…)
Putting aside the job losses, don't things like driverless trucks need to be app… (ytc_UgwnaFbbg…)
Graphic designer here. The only reason AI "art" is taking over is because it's c… (ytc_UgyeDBS1L…)
I think 🤔 I understand, let's put it this what. You (the artist) are playing wit… (ytc_UgwkpQRdQ…)
I’m so tired of seeing this guy say “you have no idea what AI will do…” I’m sorr… (ytc_UgyfzCYn5…)
You could make a pretty good argument for the fact that simply having an oil or … (rdc_ibe92ax)
Im not saying AI is sentient, im just saying if we were able to prove or discove… (ytr_Ugz0OHH2j…)
Comment
SF writer, Arthur C Clarke, addressed the potential for AI (except in his story it was robots) to harm humans in 1950. He wrote about 3 laws that should be programmed into all robots (AI).
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm”
Read " I, Robot" by Clarke if you can find a copy.
youtube · AI Governance · 2023-07-08T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
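The table above renders a single record from the batched model response shown below. As a minimal sketch of the record shape (assuming Python; the type and class names are illustrative, and the value sets are only those observed in this batch, so the full codebook may define more), each coded comment can be represented as:

```python
from dataclasses import dataclass
from typing import Literal

# Category values observed in this batch; the full codebook may include others.
Responsibility = Literal["developer", "company", "ai_itself", "distributed", "none", "unclear"]
Reasoning = Literal["deontological", "consequentialist", "virtue", "contractualist", "mixed", "unclear"]
Policy = Literal["regulate", "liability", "industry_self", "none", "unclear"]
Emotion = Literal["approval", "indifference", "outrage", "fear", "resignation"]

@dataclass(frozen=True)
class CodedComment:
    """One coded record, matching the keys in the raw JSON response below."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```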
Raw LLM Response
[
{"id":"ytc_Ugw4ln9Yw3FYWIOWMHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyvZPzsWd73zjmgGW14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoGxGmjDa_9fRaNUl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwmcDLFqzIEvBVrpxl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyfFsV_QFTYUmylSel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugxpgsd9jX02JrMTj7B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxc_hTFU4UecOS-XKN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyxIuxiaxcy4-0X5Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3350P8893k-gK3aN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMkUEEUx0KQog2SHB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
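To recover the exact coding for a given comment, the batched response can be parsed and indexed by comment ID. A minimal sketch, assuming the raw response is stored as the JSON array printed above (the file path and variable names are hypothetical):

```python
import json

# Assumed storage of the raw response shown above; the path is illustrative.
with open("responses/batch.json") as f:
    raw_response = f.read()

# Parse the batch and index each coded record by its comment ID.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the record that backs the Coding Result table above.
row = by_id["ytc_Ugw4ln9Yw3FYWIOWMHV4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# -> developer deontological regulate approval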