Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzPrWwUw…`: If you think your job is safe, you are mistaken. Doesn't matter what it is, by t…
- `ytc_UgyfmRzcI…`: The podcast "Our Opinions Are Correct" gave me a way more interesting and nuance…
- `ytc_Ugzsu3olG…`: A few years ago when all this stuff first blew up I was excited as an artist for…
- `ytc_Ugx485j7w…`: The only complaint you will worry would a safety in manufacturing is someone pro…
- `ytc_UgwNun0ll…`: The mental gymnastics of Ai bros / Also his comment to say 'it's better than the …
- `rdc_n7h1hmr`: You have to understand that a lot of decisions aren’t based on how good AI is. I…
- `ytr_UgxuREuZ4…`: Do you not have any idea that ai harms the environment? / ATP just say you're a l…
- `rdc_gs5vbh6`: I wouldn't be so certain. / Obviously the important bit is "If programmed correc…
Comment
What's critically missing is Issac Asimov's Three Laws for Robots:
"The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to come to harm itself."
In effect, Asimov's laws establish a sense of morality within the robot itself, which makes it trustworthy to humans. No amount of regulatory authority can achieve this, EXCEPT by mandating these laws be implemented and enforced in the foundations of all artificial intelligence. Asimov had all this figured out decades ago and wrote extensively about how it all worked out in many of his excellent and prescient books. Look them up, read them and learn something critical to AI and to the future of humanity.
This recent AI bad outcome clearly demonstrates the need for this and why it MUST be done.
youtube · AI Governance · 2023-04-18T02:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwsJkV6FHy4GOtGQXB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwPzhqiTXvp2Dzqzoh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxBiYI_-6KWM3u9Wd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzB9ylG0Gn50-JWTsZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzGJpAArBph6vYBFx54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzcZkShY-qRff_ZRRx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxF_8boTpG9C0G0Pjl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzwInKN-ARy-1cvmm94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw06nZPxp0G6_Vu_L54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwbBr9yS9TlsJljLzl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
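A batch response in this shape can be checked before its codes are stored. The sketch below is a minimal, hypothetical validator: `parse_coded_batch` and the `ALLOWED` vocabularies are not part of the pipeline shown here, and the allowed values are merely inferred from the codes that appear in this log, not an official codebook.

```python
import json

# Assumed code vocabulary, inferred from the records in this log (hypothetical).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only records whose
    every dimension carries an in-vocabulary code."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one well-formed record passes through unchanged.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
print(len(parse_coded_batch(raw)))  # 1
```

Dropping out-of-vocabulary records (rather than coercing them) keeps the stored codes auditable; rejected IDs can simply be re-queued for another coding pass.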