Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples — click to inspect
- `ytc_UgxDdInqI…`: "There can be no implementation of AI for "efficiency" until a minimum living sta…"
- `ytc_UgzVO8MGs…`: "Once again, people are losing their minds about the consequences of a new techno…"
- `ytr_UgyazJdwJ…`: "@seisdoble94 I think it is you who failed to see something. I'm not suggesting t…"
- `rdc_ghcay0o`: "I really don't think people understand the full extent to which facial recogniti…"
- `ytc_Ugzm5vBwb…`: "Guys don't worry none of these companies can figure out how to turn ai into a vi…"
- `ytr_UgxNkvBQO…`: "@MrGrantGregory ay my bro I was joking cuz, we know everything that comes outta …"
- `ytc_Ugx3DSlgJ…`: ""Since being added to the list he was shot twice" So you're saying the AI was co…"
- `ytc_UgxCpQa1q…`: "Isn't this the same guy who told someone that they wouldn't be able to draw as g…"
Comment
We are capable of empathy, we can be held responsible for our actions and we have some sort of innate morality-mechanism that allows us to make decisions in completely new situations, and we do it very consistently. We have no clue how to make a robot like that because we have no clue how exactly these things work. Even if very advanced aliens made androids in an attempt to model our brains, it would be difficult for us to say if they are truly like humans because we don't really know how we work. A bunch of IF-THEN statements do not make a human, it has to be creative.
youtube · AI Moral Status · 2017-02-23T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UggMYT3QVEugTngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg4EttFwJ0C_HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggQic20SC1MG3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugh9ZDxiKzDTDXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgheLFoKvgFErngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgggD4wUkJmJlngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UghWccnEejCDEngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UggmC4suz5PNg3gCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugi8KLtUpuUXmngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugh9GeRqG8Yl1HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
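The raw response above is a JSON array of coding records, one per comment, each carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" workflow, assuming only that the response parses as such an array (the `index_by_id` helper and the two inline records are illustrative, not part of the real pipeline):

```python
import json

# Two example records copied from the raw response shown above.
raw_response = """[
  {"id": "ytc_UggMYT3QVEugTngCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgheLFoKvgFErngCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and map comment ID -> coding record."""
    return {record["id"]: record for record in json.loads(response_text)}

codings = index_by_id(raw_response)
print(codings["ytc_UgheLFoKvgFErngCoAEC"]["policy"])  # prints "regulate"
```

Indexing by ID this way makes each coded dimension retrievable in constant time, which is all the inspection view above needs.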