Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxpAN9s1…` — "Well, it’s getting worse with AI. It started and we need better laws to protect …"
- `ytr_UgyGB9d8G…` — "Thought of some more rules... 11. Don't allow AI access to Bio weapons or chemi…"
- `ytr_UgzqXXcg3…` — "@awesome-coding That's not the takeaway from that video. You compared it to the …"
- `ytc_UgxlmscQ5…` — "Humans will have a need to want to implant chips into themselves to make themsel…"
- `ytc_Ugx3xw4e8…` — "Even a slow take-off of super intelligence is dangerous if the problematic AI mo…"
- `ytc_UgxPfevVa…` — "ok, so according to this theory an advanced civilisation will first send AI robo…"
- `ytc_Ugz5Bh0Oc…` — "with the way everything is going, getting a ai like the thunderhead from the Scy…"
- `ytr_Ugz7fCtvJ…` — "AI rocks! It is the most wonderful tool and it will be super beneficial for the …"
Comment
I think that robots and artificial intelligence are two very distinct things: the first category does not deserve rights because they don't have contiousness. Their only task is to work as their name suggests (from Russian "Работ-", to work).
Artificial intelligence meanwhile should be considered much more similar to a human, or at least to an animal (depends on the intelligence of it) and they should only be implemented in work situations if granted the same rights as we humans.
They could also be implemented in administration and, again depending on their level of intelligence, granted representation in parlaments and such.
The only problem comes with trying to estimate their intelligence: it would probably be a massive task that only once we reach the level of singularity we can tackle. Needless to say, that is a huge problem.
Source: youtube
Video: AI Moral Status
Posted: 2020-07-28T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxWl51xd66j3p-3hs54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy5ZPH15izuvOSYYYB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"sadness"},
{"id":"ytc_Ugw2v4yU19slb-zXDqt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxULQRXtDBc_CnGqNJ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxIMp0mytvW_5Xnr-V4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxzT2gFdtbIGZVs4xF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyDLUjEeYcUG5ildLh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxV3EVVQP7Ej0nA0Np4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxvv_uYMZbCQIdNG7h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugys1h8J4WxOiIjihHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
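A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example of that step; the allowed label sets are inferred only from the values visible in this dump and may not match the full codebook, and `validate_batch` is a hypothetical helper name, not part of any existing pipeline.

```python
import json

# Label sets inferred from the values visible in this dump;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"mixed", "sadness", "approval", "fear", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded comment."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} = {row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_Ugxvv_uYMZbCQIdNG7h4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"regulate","emotion":"mixed"}]')
rows = validate_batch(raw)
print(len(rows))  # → 1
```

Raising on the first bad row keeps malformed model output out of the coded dataset; a production version might instead collect all violations per batch for re-prompting.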