Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- `rdc_jiptfii`: "People really need to learn that regulation isn’t banning something. Regulating…"
- `ytc_UgyKarBg7…`: "so was the AI what determined a person with stage 4 cancer who's also suffering …"
- `ytc_UgzEIt3RQ…`: "Trumpians are the death squad. They want to rule and be served. They do not wa…"
- `ytc_UgzmvF1Ve…`: "Can’t blame them. Human connections are very complicated. More people are busy …"
- `ytc_Ugz95YDdd…`: "The fact that you're willing to automatically trust chatgpt is telling. I've bee…"
- `ytc_UgxO_ndMS…`: "The moment in the interview where the mother talks about the last conversation b…"
- `ytc_Ugzfs8BEr…`: "More and more people are going to start using these chat bots for medical advice…"
- `ytr_UgyFRVISF…`: "@slothguy_ so Bob Ross is a bad artists? cause I'm sure he would be fed up with …"
Comment

> Without emotions, wants or desires robots have nothing to guide them except their programming. So far the tasks given to robots are pretty straight forward and simple. Eventually robots will be asked to take on more nuanced tasks which require judgement. Robots don't have emotions or intuition so they will need ethics. This concerns me because their programmers (humans) haven't got this figured out yet. Most or our ethics have more to do with our emotions than with our reasoning capabilities. We are more likely to do what feels right than what is actually right and much of this is based on the way things are rather than the way they aught to be. In other words, human ethics are a bad model for robot ethics. When we look for ethics in what comes next we see that the military is still the lead driver of progress. The last thing we want is a killer robot with human ethics. As I continue my quest for universal ethics I ask myself, "what would I want our robot overlord to do in this situation."

Platform: youtube
Posted: 2016-02-09T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
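The four dimensions map naturally onto a small typed schema. The sketch below is illustrative rather than the tool's actual code: the class and field names are assumptions, and each enum lists only the value labels that actually appear on this page, so the real label sets may be larger.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical schema; value sets reflect only labels seen in this sample.
class Responsibility(Enum):
    DEVELOPER = "developer"
    GOVERNMENT = "government"
    AI_ITSELF = "ai_itself"
    NONE = "none"

class Reasoning(Enum):
    DEONTOLOGICAL = "deontological"
    CONSEQUENTIALIST = "consequentialist"
    UNCLEAR = "unclear"

class Policy(Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    NONE = "none"
    UNCLEAR = "unclear"

class Emotion(Enum):
    FEAR = "fear"
    APPROVAL = "approval"
    RESIGNATION = "resignation"
    INDIFFERENCE = "indifference"

@dataclass
class CodingResult:
    comment_id: str              # e.g. "ytc_UggTwzbkp_XndHgCoAEC"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str                # ISO 8601 timestamp of the coding run
```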
Raw LLM Response
```json
[
  {"id":"ytc_Ugg4GInPcKkb-HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggbNiIQAwMhLXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggTwzbkp_XndHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UghF9MR_ZEVqKngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UghDbu_Pkj6s63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Uggcfz830ZCOfXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgiAgMSu73Y2NHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgjoAwGRrcSK63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UggTE2bn7JIxjXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UggHT3fme4glYXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
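Because the model returns one JSON array per batch, the per-comment lookup shown above amounts to parsing that array and indexing it by `id`. A minimal sketch, assuming the raw response is available as a string; the function name and variables here are illustrative, not the tool's actual API:

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one batch of codings and index the records by comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

# Usage with a two-record excerpt of the batch above:
raw = '''[
  {"id":"ytc_UggTwzbkp_XndHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Uggcfz830ZCOfXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]'''
codings = index_raw_response(raw)
print(codings["ytc_UggTwzbkp_XndHgCoAEC"]["emotion"])  # -> fear
```

Indexing by `id` also makes it easy to spot batch errors, such as a comment the model skipped or an ID it returned that was never sent.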