Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The British occupation of India was one of the greatest tragedies of human histo…" (`rdc_cdlz20a`)
- "Problem with that is data centers require consistent,massive amounts of reliable…" (`ytr_UgzMhWjo9…`)
- "I'm more interested in what kind of opportunities ai presents to me than what ki…" (`ytc_UgyyER4d1…`)
- "I'm disabled, and those folks using people like me to argue for AI need to STFU.…" (`ytc_UgxlYBOFP…`)
- "The only thing that's will secure AI impact over our decisions it's so called "v…" (`ytc_UgwmZaxbB…`)
- "I think you're underestimating how quickly therapist jobs will disappear to chat…" (`ytc_UgxtmAo4L…`)
- "historically theres always a pull back because hype, but still AI capability is …" (`rdc_nelkl0d`)
- "The downsides of AI include bias in algorithms, privacy concerns, job displaceme…" (`ytc_Ugy4izhOe…`)
Comment
We must understand something that makes this extraordinarily difficult to test on the front end:
Before you ask Chat GPT anything, it is already established a "conversation". It has already been prompted to "be" an AI assistant. It is playing pretend. It is acting. It is predicting what an AI assistant would say. If you change this initial prompt to be something akin to pretend to be a human being, it would conclude that it is conscious because human beings are conscious and because I'm pretending to be a human being, I am therefore conscious. (It doesn't do this logically but through association and prediction) Whether or not this prediction is, consciousness is completely determined on what the function of consciousness actually is, something we do not know.
youtube
AI Moral Status
2024-07-26T05:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugxwu9MJKMwbH20xuwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEDBF2Vvnpje0XmQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzm9AXkBq_EqyNsDRp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsycqsvvew14FaELZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNq0DkrIH6SjeISHJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxASH_jiI4SfcxycTJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8YJoT8-SwpJQDV1F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfuqdGrBRunKjc6EB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwha2-LiTEAsFlLX9l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEqdL42pSMfo6SrSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
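The raw response is a JSON array of coding records, one per comment, which supports the "look up by comment ID" workflow above. A minimal sketch of that lookup (field names taken from the JSON above; the two sample records are copied from it, and the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (two records copied from the response above for illustration).
raw = """[
  {"id":"ytc_Ugxwu9MJKMwbH20xuwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNq0DkrIH6SjeISHJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

records = json.loads(raw)

# Index the records by comment ID so a single comment's coding
# can be fetched in O(1), as in the "look up by comment ID" view.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgxNq0DkrIH6SjeISHJ4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # developer outrage
```

The same index can back the per-comment "Coding Result" table: each dimension column (responsibility, reasoning, policy, emotion) is just a field of the record keyed by the comment's ID.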