Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "My 15 year old son was hell bent this past year on NOT using Chatgpt like his ol…" (ytc_UgzGks3oD…)
- "Lucky it wasn't some maniac who wanted a massacre. This is why ai isn't safe, ev…" (ytc_UgwfA_Dic…)
- "I think it's great you are doing this, but the realist in me says this is far to…" (rdc_emok7ks)
- "It's over for all these Elon musk fan boys. Soon, humans will be viewed less tha…" (ytc_Ugw1OTGdx…)
- "AI replacement for call centers is usually a really bad call center. I remember …" (ytc_UgwQyv2sr…)
- "Totally agree with the sentiments of this video. I honestly thought I would be s…" (ytc_Ugw2A3LrM…)
- "okay so i love writing. i write when i have nothing else in the world to do/ to …" (ytc_UgwcdFgY2…)
- "This is why laws must exist that someone must always be behind the wheel of the …" (ytc_UgxJb2AFE…)
Comment
To answer the main question: Implicit in the idea of progress is that we are making things better. That is, better for humanity, not merely better in itself. If we knew for sure that making better robots and AI would be bad for humanity, then no, there is no ethical duty on our part to make and improve them.
The problem here is that we simply don't know if it will make things better or not. Therefore, while it is not an ethical duty to improve robots and AI, there's no compelling reason not to do so. Perhaps some time in the future it will become more obvious that it is clearly good or bad.
Still the possibilities of AI taking over not just the drudge work of humanity, but practically all work, is such an interesting idea that I think I'll have to write a blog article about it (because it would be too long for YouTube comments). If I do, I'll come back and post a link to the article.
Source: youtube · Posted: 2015-02-19T19:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugg6_c_fnxJFiXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjBrm-BO4E1Z3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggwq5VL_P9YvngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugi28m3CG46xzHgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggmA4p100IU0HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgiqEwaXkqSM-ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugi0PpcKcA8VCXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgghsB3quoCVXHgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjzbO8DgHLWlngCoAEC","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgiclBN6LTRIL3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
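The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID follows; `index_by_comment_id` is a hypothetical helper, not part of the tool itself, and the default value `"unclear"` for missing dimensions is an assumption.

```python
import json

# Dimensions coded for each comment, as seen in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# A one-record excerpt of the raw model output shown above.
raw_response = """[
  {"id": "ytc_Ugi0PpcKcA8VCXgCoAEC", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index coding records by comment ID.

    Hypothetical helper: assumes the JSON-array-of-objects format shown
    above. Missing dimensions default to "unclear" (an assumption).
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

coded = index_by_comment_id(raw_response)
print(coded["ytc_Ugi0PpcKcA8VCXgCoAEC"]["emotion"])  # resignation
```

With the records indexed this way, the "Look up by comment ID" step reduces to a single dictionary access.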