Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Remixes, covers, collages, commentary, satire, etc. Generative AI can either vio…
ytr_Ugz1vNra0…
I have a idea just every artist slap their name across in big letters their art …
ytc_UgxWfQvQn…
Women are already marginalised by AI. Just listen to the AI narrators for a sta…
ytc_Ugy35GzvJ…
@berniesanders if Ai/Automation/robotics take all the jobs--who will buy all the…
ytc_UgwDl3B2h…
So, who was the first to start using AI for brain surgery when they are still on…
ytc_Ugz1OsP9B…
This is very scary, I think everyone should limit there online footprint, if Ai …
ytc_Ugxmwxffv…
There should be an association a comity where all AI developers work together to…
ytc_UgzWPcnYG…
DEFINITELY
AI arts is dirty, evils, made from illegal photos around the world a…
ytc_UgwKitvji…
Comment
The problem with LLM's is that they draw exclusively from past human thinking, which has been categorically clueless*. If they answer anything other than consciousness as 'being in touch with your senses, being able to deliberate what your senses are telling you, and being able to respond to them based on your deliberations', then they are drawing from past muddled and convoluted human thinking (especially from the 20th century, which was particularly bad). If you want 'higher consciousness', then you will be enhanced in all three categories, especially in the deliberations, which would be based on Final Enlightenment* in the highest case.
What all this means is that two current ChatGPT's talking about consciousness will be fruitless, if not misleading, if not ridiculous.
*as defined by Broader Survival
youtube
AI Moral Status
2024-11-11T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwEtCYC1JBRzOwkptJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzeyIZa0p8A1MDRIB54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwI5qTDpCa9DCK_crV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyNNqif7NNbsVnofxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw3LvUNMNaIaBp7LJN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyj4IYHO0qG72DtqnV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxtI36j-oTnr_TL72R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUmErN_Bc2Tr4Wq7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxhR67YlKjHGZVWEVh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsRNoemndTx7fUvrd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
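The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of loading such a response and looking up a comment's codes by ID (the field names come from the response itself; the sample rows are copied from it, and the `lookup` helper is hypothetical, not part of the pipeline):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwEtCYC1JBRzOwkptJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxUmErN_Bc2Tr4Wq7t4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''

# Index the coded rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment (raises KeyError if absent)."""
    return codes[comment_id]

print(lookup("ytc_UgxUmErN_Bc2Tr4Wq7t4AaABAg")["emotion"])  # outrage
```

Note that the closing delimiter of the raw response must be `]`, not `)`, or `json.loads` will reject the array.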