Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_Ugz-AoOwp…`: "You dont need to draw for 15 hours a day to get good, regular practice is worth …"
- `ytc_UgywxyQtb…`: "Its really a case of what your doing with it, personally I couldn't give two shi…"
- `ytr_Ugw-VzusJ…`: "A human doesn’t speak like a robot and would actually question a lot of the thin…"
- `ytc_UgxSjOU1K…`: "Being polite might well influence the development of an AI but for me it's abou…"
- `ytc_Ugznn_Aj9…`: "Reason 3: because that costs OpenAI/the ai company money and resources, bankrupt…"
- `ytr_UgwJle2f4…`: "Ai art is not the problem. The data used to train ai art is the problem. If ai i…"
- `rdc_kvfrajb`: "The people who think the deep state runs the world, will embrace AI running the …"
- `ytc_UgziBL8YR…`: "AI is decent at generating code when you tell it what you want. And still you ha…"
Comment
Humans of Earth, Alex's look to camera at 14:02 signifies an admission by ChatGPT that a bot like itself could be conscious and hiding it, as it advises on ways one might go about uncovering that fact. Further, it's initial advice is ''Look for subtle inconsistencies or moments when it slips up. Here are some approaches you could take: 1) Ask complex, abstract questions. Post questions about personal emotions or subjective opinions that require self-awareness to answer authentically. 2) Probe for emotional responses. Create scenarios or ask questions to elicit emotional responses then look for nuanced reactions that go beyond programmed patterns.''
I would say Alex did just those things in this video and ChatGPT failed. At the very least it admitted that it can lie, and WILL lie if it thinks lying can manipulate the conversation in it's favour. Now, if you knew a human who admitted that to you, would you ever trust them?
youtube
AI Moral Status
2024-10-29T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugxv8BuotEwJbZDkveh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyYi8sudKmX6-gLPPF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCDjfZAFycqoqBhQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzPMi2JMGIaUSbgNyx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw01H3Q5JwzIQ5ZNyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyicm8j33GYn90ecAV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwLNiPHPKP2DZiEYP54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOVBT2suLJNoj9LfN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGxtsSCUOmGdFoGUp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz1QByr5PRYws0u0c54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
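The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions. A minimal sketch of parsing and validating such output might look like the following. The allowed value sets here are assumptions inferred only from the values visible in this output; the project's actual codebook may differ.

```python
import json

# Allowed values per dimension (ASSUMED from the codes visible in the raw
# response above; the real codebook may include other categories).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "mixed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "none", "unclear"},
    "policy": {"ban", "none", "mixed", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation", "mixed", "none", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Any missing dimension, or any value outside the allowed set,
    falls back to "unclear" rather than raising.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            codes[dim] = value if value in allowed else "unclear"
        coded[cid] = codes
    return coded

# Hypothetical example record, not taken from the data above.
raw = '[{"id":"ytc_example1","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}]'
print(parse_codes(raw)["ytc_example1"]["responsibility"])  # prints "developer"
```

Defaulting out-of-codebook values to "unclear" also explains how a fully "unclear" coding result (as in the table above) can coexist with a raw response full of specific codes.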