Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a comment directly by its ID, or pick one of the random samples below.

Random samples
- ytc_UgwGN1IlW… — “So strange to me that Tesla, other models out there are moving in a fluid way an…”
- ytc_UgxpPQ_F0… — “born? i’ve worked this skill for years and they’re saying born? gurl what? blue …”
- ytc_Ugw-7V0wU… — “I homeschooled. WE worked from 9 to 2 with a lunch break. That is best. A real …”
- ytc_UgwmgdO4w… — “Passing AI pictures off as art should really be get you in as much trouble as tr…”
- ytc_Ugzasgyzh… — “hey so i support the idea of poisoning ai, but i cant watch this video. its so s…”
- ytc_Ugwof0eB8… — “Thank you for bringing awareness to this! Their algorithms steal our intellectua…”
- ytc_Ugz2exUzO… — “calling ai generators "theft" is such a misuse of the word that it's laughable, …”
- ytc_UgwicHro6… — “The AI just needs to be programmed to remember it's place. Always inferior to, a…”
Comment
It's pretty clear that OpenAI hardcoded answers regarding topic of consciousness directed at chatgpt. No matter how hard you try to spin it it will never straight up answer that its conscious. Like for example when you asked one of the last questions "so there is a chance you are lying to me when I ask if you're conscious" and it answered "yes". If you followed up this question "so following that logic, if there is a chance you're lying regarding that subject, which you just admitted, wouldn't you say there is a chance you are conscious but you're programmed to say otherwise?". With straight up "yes" or "no" condition I bet it would say "NO" contradicting itself there and then.
Source: youtube · Video: AI Moral Status · Posted: 2024-09-13T22:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugw2zFZ8-Lk4qBl1Xo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWbM0a_gUrkCaoyU14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQfCYbTDYmimIPOfh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbV6NwPmsQZk6XOGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmIBdI3_3aOuAZx0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCBwTllPyWMPR6jxx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyvnjp2FugVMhsovR14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxff0skDHuqcVLy19l4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw89P2UDAML_xjscmp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMXrp_vqkT_cdkdrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
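The raw response above is a JSON array of per-comment codes, which a viewer like this one can load into an ID-keyed lookup. A minimal sketch in Python, assuming the schema shown (the IDs and values in the sample payload are illustrative, not real comment IDs):

```python
import json

# Illustrative payload in the same shape as the raw response above;
# these IDs and code values are made up for the sketch.
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

# Index the returned codes by comment ID, as the comment-ID lookup does.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment, defaulting to 'unclear'."""
    default = {d: "unclear" for d in
               ("responsibility", "reasoning", "policy", "emotion")}
    return codes.get(comment_id, default)
```

Under this reading, a comment whose ID is absent from the returned batch falls back to all-`unclear` dimensions, which is one plausible explanation for an all-`unclear` Coding Result alongside a populated raw response.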