Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I guess if history teaches us anything it is that we will definitely enslave robots even if they start to have feelings and desires ^^
we'd do anything if its just convenient
though I guess the real difference is that you can literally reprogram a robot to feel pleasure when finding gold or toasting your bread. I mean you can manipulate humans, but theres a limit. You cant overwrite our biological programming to hate death, pain and oppression. But you could probably program a sentient suicide bomber to feel fulfilled and happy while dying.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2017-02-23T17:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UghFOa07-R0FZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UggK5dZalIyzLHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggd5zYoujRxG3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UggFH45PnMli83gCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggF3rIxhUqsNHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjY4zXR-8mkUHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgjdyJWYWQJnSXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugh76ksslKQeSXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UghlqwGuxj_V4HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugh3E2GHdas6rXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
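A batch response like the one above can be parsed and sanity-checked with a short Python sketch. The allowed value sets below are assumptions inferred from the values visible in this sample, not the pipeline's actual codebook, and `validate_batch` is a hypothetical helper name:

```python
import json

# Allowed values per dimension -- inferred from this sample only
# (an assumption, not the pipeline's actual codebook).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"resignation", "fear", "disapproval", "mixed",
                "indifference", "approval", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records
```

Validating before ingestion matters because LLM output is not guaranteed to follow the requested schema; a record with a misspelled label or a missing dimension should fail loudly here rather than silently skew the coded dataset.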