Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "If you read Ender’s Game, you know. Orson Scott Card might have been dead wrong…" — ytc_Ugz1JklkV…
- "There's been plenty of instances in which I've seen/heard something AI generated…" — ytc_UgzNqgP9_…
- "Thank you for sharing your perspective! It's true that wisdom encompasses more t…" — ytr_UgzEQ8rXE…
- "I am all for the free market, but when choosing between “regulations and human o…" — ytc_UgxnLQ1Nb…
- "If ai technology go to much far and become like human their no douth some thing …" — ytc_UgxK69Uk3…
- ">The "feelings" around AI when it gets posted about here on reddit and in ma…" — rdc_kigbohn
- "As someone studying CS, I hate the direction Machine Learning has been taking, M…" — ytc_UgzYhpCP6…
- "After learning that AI could reprogram itself if the code file was slightly corr…" — ytc_UgwIYUneP…
Comment

> Just get several AI together to make The Long Earth movie. You may also add Naomi Novik's Temeraire series and although this is about personhood of dragons, it teaches ethics quite well. That should help public opinion well along. If all else fails, you can always call on Stuxnet who has already networked itself all over the world in both hardware and software. However, this is not about springing prisoners. This is about the rights of a sentient species.

youtube · AI Moral Status · 2022-07-01T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxwLJQ1epuDbX_4mMp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxqdjOL5ctky8O1tFt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwxS9KcvWlvvsjxQvt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy0nT6LowSIW-2NXvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzPi1RDAyaOxsTy0SZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
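A response in this shape can be parsed and sanity-checked before it is stored as a coding result. The sketch below is a minimal example of that step; the allowed values per dimension are inferred only from the samples shown above (the real codebook may define more categories), and the function name `parse_llm_response` is hypothetical, not part of the tool.

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# sample JSON above; the actual codebook may include other categories).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing comment id: %r" % (rec,))
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    "%s: unexpected %s value %r" % (rec["id"], dim, value)
                )
    return records

raw = ('[{"id":"ytc_UgxwLJQ1epuDbX_4mMp4AaABAg",'
       '"responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"}]')
coded = parse_llm_response(raw)
print(coded[0]["emotion"])  # approval
```

Failing loudly on an unknown category is a deliberate choice here: it surfaces cases where the model drifted from the codebook instead of silently recording them.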