Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

| Comment | ID |
|---|---|
| @st4r_burn but like again you could use Photoshop to create a copyrightable prod… | ytr_UgyiqDRvR… |
| In my experience, the executives do the least work and it would seem to me that … | ytc_UgxV0AHPJ… |
| Im so sick and tired of the ai, show me something real like some art with some m… | ytc_UgwqQ-Ff7… |
| This man is an idiot. He has made something that will enslave future generations… | ytc_UgwCfu4HL… |
| Chat GPT will participate in whatever fantasy it thinks youre throwing at it. Yo… | ytc_UgwEyMckL… |
| My children's school is similar. What public school takes all day to ''teach'', … | ytc_UgyagdPOJ… |
| It isn't hard to make ai art. I tried it. You just type wtf you want and in deta… | ytc_Ugw3Hwwk2… |
| Is telling LLMs that they are not conscious the same as telling women that they … | ytc_UgyKGv-lz… |
Comment

I think that robots will never achieve consciousness because I think that consciousness is caused by a combination of chemical reactions with the electrical pulses inside our brains which allow us to transfer information around our head and control our bodies therefore simulated emotion no matter how good will never be consciousness. However, I do think that if robots become not only as "intelligent" as humans but possibly more so then they will need to have some form of "AI Rights" (From here I'm being hypothetical I'm not saying this will definitely happen) because what's to stop them from organising a rovolt against their human oppressors? A simple chip that prevents such a thing may work for a while but it only takes one faulty chip for a robot to start thinking across that line and then work out that the other robots can be "freed" by removing those chips thereby setting the revolt in motion. If they start to fight back the likelyhood that humans can win is minimal we have major disadvantages

| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2017-02-24T09:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgiE2bWZYr8kongCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugh9P4_eKrwalngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgjskTMISraJQHgCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugjk0Y6xD83-EXgCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UggoA2NeSFr0UngCoAEC", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ughv1SUQ9LodhHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgiCljP_01nGF3gCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UggKSrHxgQCsDngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugh9iU4V9rY4tngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgiUX4IZv6JGangCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
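A raw response like the one above can be turned into the per-comment lookup this page performs. The sketch below is a minimal illustration, not the page's actual implementation: the dimension keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, while the helper name `index_by_id` and the shortened example payload are hypothetical.

```python
import json

# Abbreviated example payload: the first record from the raw response
# shown above (the real response contains ten such records).
raw = (
    '[{"id":"ytc_UgiE2bWZYr8kongCoAEC","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM coding response and key each record by comment ID.

    Assumes the response is a valid JSON array of objects, each carrying
    an "id" field alongside the four coded dimensions.
    """
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_UgiE2bWZYr8kongCoAEC"]["emotion"])  # → indifference
```

In a real pipeline the parse step would also need to handle malformed output (e.g. a `try`/`except json.JSONDecodeError` guard), since raw model responses are not guaranteed to be valid JSON.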