Raw LLM Responses
Inspect the exact model output for any coded comment.
Records can be looked up directly by comment ID or browsed through the random samples below; a minimal lookup sketch follows the sample list.
- "I understand where you’re coming from, but Sophia’s design is meant to reflect a…" (ytr_Ugy1HRN67…)
- "Soooo WRONG! - This Is A White-Collar Entry Level WIPEOUT!! - As a programmer an…" (ytc_Ugx_TvnH0…)
- "That predictive ai shit sounds like it came straight outta movies or something, …" (ytc_UgwHD-ZKy…)
- "Its called creative destruction. Do you want to return to the day when 50% of a…" (ytc_Ugz6dqGUZ…)
- "Great video! I agree with your hard stance on ai in the arts! As a new indie aut…" (ytc_UgxIaQNTr…)
- "LMAO I Just watched Ex Machina and knowing that A.I. like this exists is quite c…" (ytc_UgxNdVNai…)
- "We appreciate your feedback. If you have any specific questions or topics you'd …" (ytr_UgxMhEiBp…)
- "No worries, there will be tens of thousands of jobs created fixing all the probl…" (rdc_m8g0swh)
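For illustration, a minimal sketch of the ID lookup. The ID prefixes appear to distinguish sources (ytc_ for YouTube comments, ytr_ for YouTube replies, rdc_ for Reddit comments), though the page does not spell this out; the JSON-lines store and its file name are assumptions for the sketch, not the tool's actual storage.

```python
import json
from pathlib import Path

def lookup(comment_id: str, store: Path = Path("coded_comments.jsonl")) -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes one JSON object per line, shaped like the Raw LLM Response
    entries shown further down this page (hypothetical file name).
    """
    with store.open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines in the store
            rec = json.loads(line)
            if rec.get("id") == comment_id:
                return rec
    return None

# Example: lookup("rdc_m8g0swh") would return that comment's coded record.
```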
Comment
The idea that they will only have what they are programmed to have, and are thus completely in human control, is a bit of a narrow-minded idea, because there are already examples of pseudo-smart programs developing themselves, such as Google Translate. The engineers and programmers remade it, but didn't predict that the new version would create its own original language, which it then uses as a proxy whenever it's translating between two languages that it hasn't handled before. It turned out to be a really effective way of preserving as much information as possible in the translation, something that a simple dictionary translation can't do.

It's a simple example of a program developing itself. Does this mean that AI will make emotions for itself? No, but what it does heavily suggest is that we won't be able to predict what will happen. If an AI concludes that the most efficient method of accomplishing some given task is to program itself with such concepts, it will do so, and after that, we'll have to re-assess the idea that it's just a tool.
youtube · AI Moral Status · 2017-02-24T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
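The four coded dimensions plus the comment ID form a small fixed schema. A sketch of that record type, with the value sets taken only from the examples visible on this page (the full codebook may define more values):

```python
from dataclasses import dataclass

# Values observed in this page's samples; assumed incomplete.
RESPONSIBILITY = {"developer", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "approval", "outrage", "indifference", "mixed", "unclear"}

@dataclass
class CodingResult:
    """One coded comment, as shown in the Coding Result table."""
    id: str              # e.g. "ytc_…", "ytr_…", "rdc_…"
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True if every dimension uses a value observed in the samples."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```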
Raw LLM Response
[
{"id":"ytc_Ugi0hj0S4tOJK3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghjyLhFY0N9d3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj2Jo_uYDf2v3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjWcRsFfwSE13gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggSkZsWg39NxXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj0QLN4cIFMF3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggPezFG5S3VS3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj22OTCNxaAhHgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugg7RpJojOWA93gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
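Since the raw response is a plain JSON array, matching a record back to a comment is a parse-and-index step. A defensive sketch, assuming only the format shown above; the skip-malformed policy is an assumption, since model output is not guaranteed to be valid JSON:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw batch response like the JSON array above.

    Returns only well-formed records; malformed entries are skipped
    rather than raised (an assumed policy, not the tool's documented one).
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []  # the model returned something that isn't JSON at all
    if not isinstance(records, list):
        return []
    return [r for r in records
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

# Example: index the batch by comment ID and pull one record.
# records = parse_raw_response(raw_text)
# by_id = {r["id"]: r for r in records}
# print(by_id.get("ytc_UghjyLhFY0N9d3gCoAEC"))
```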