Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It's my opinion that AI will never become conscious and we will never create an AI that is conscious. But I do think we could come very, very close. To argue from an ethics point of view, one could make the argument that it would be unethical to create an AI that believes it is conscious.
Think about it: You have an AI, it thinks (sort of) and it has the ability to mimic emotions to a very close degree of accuracy. It is programmed to understand and replicate loss, grief, anger, joy, happiness, but never fully achieve them. It would be a torturous existence. Especially if you tried to explain to an AI the meaning of life it can never truly experience.
Source: youtube
Video: AI Moral Status
Posted: 2023-07-06T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwUNIAo4RB4iwO7vlF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwcvi6e66Y3YZjDLkZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwHZd0AFqTkwMawewt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzDWIA0k10faQ0GpHh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyoOIaCvZwnw0XfpZJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy_cbM_SDt6FajwGed4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwvsmlhYNKhD-AgPkJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzF_F4QwPMYKbeEdSZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyYXVUS8lxIasbBaad4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwRaw-ALGdiqhZyL1d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
```
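A Coding Result table like the one above can be derived from such a raw response by matching on the comment ID. A minimal sketch, assuming the response is a JSON array of per-comment records and assuming (this is a guess about the tool's behavior, and `code_for` is a hypothetical helper) that any dimension missing for a comment, or an unparseable response, falls back to `"unclear"`:

```python
import json

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, response_text: str) -> dict:
    """Return the coding for one comment from a raw LLM response.

    Any dimension not present in the matched record, or any comment ID
    absent from (or malformed) output, is coded "unclear".
    """
    try:
        records = json.loads(response_text)
    except json.JSONDecodeError:
        records = []  # malformed model output: code everything "unclear"
    by_id = {r["id"]: r for r in records if isinstance(r, dict) and "id" in r}
    rec = by_id.get(comment_id, {})
    return {d: rec.get(d, "unclear") for d in DIMENSIONS}

# Illustrative record (hypothetical ID, values mirror the raw response format).
sample = ('[{"id": "ytc_example", "responsibility": "developer", '
          '"reasoning": "deontological", "policy": "none", "emotion": "outrage"}]')
print(code_for("ytc_example", sample))
print(code_for("ytc_not_in_response", sample))
```

An all-"unclear" row, as in the table above, is what this lookup yields whenever the comment's ID never appears in the model's response.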