Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Developing morality for humans is difficult enough. Hundreds of years ago most people thought slavery was OK. Less than 100 years ago many people thought colonizing other countries was a moral good. And right now people disagree very strongly over whether abortion is moral.
Even the example in the description is questionable. Some people would say that while it is unfortunate for the child, the driver has a right to self-preservation, so it is OK to choose to hit the child rather than drive into oncoming traffic. Others would be just as adamant that the right thing to do is to sacrifice yourself to protect the child. And some would call the sacrifice supererogatory: ethically praiseworthy to make, but still morally acceptable to decline.
Since there is a lot of reasonable disagreement over questions of ethics and morality, people should get to decide what sort of moral algorithms they want their AI to have, within reason of course.
Platform: youtube · Video: AI Responsibility · Posted: 2017-08-02T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx5g77qdMUz0syFbst4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwoMAPoGz3dEOCuF-p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxAzaTqk57_kVDNiO54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwMUIXYpcd4m4J-NNt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwdBy83MMNCS6Uc7dp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxI8Dj9j3oHkch2LiR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugj2lLMFsb26AHgCoAEC","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgigHDniRhijr3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugio8dY1VUx40HgCoAEC","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ughzr_hiJXHc03gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
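
A raw response like the one above can be turned back into per-comment codes by parsing the JSON array and indexing it by comment ID. The sketch below is illustrative, not the dashboard's actual code; the `raw_response` string quotes two entries from the array above, and the variable names are assumptions.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw_response = '''[
  {"id":"ytc_UgxAzaTqk57_kVDNiO54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugj2lLMFsb26AHgCoAEC","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Index the coded rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding across all four dimensions.
code = codes["ytc_UgxAzaTqk57_kVDNiO54AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → distributed mixed unclear mixed
```

Indexing by ID mirrors the "Look up by comment ID" feature: once the array is a dict, the coding-result table for any comment is a single dictionary access.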