Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Having Performed with AI as if its an imorovistatinal actor I can confirm it doe…" (ytc_Ugxf3nxtm…)
- "Wake up.. we dont have ai we have a autocomplete guesser. Wait till the bubble b…" (ytc_Ugwjz18KT…)
- "also, if you listened carefully, AI will not replace poorly paid jobs; already r…" (ytr_UgxbLpMuX…, translated from French)
- "Wah???? There is literally NO WAY!!! AN AI MODEL COULD DO THIS!!! WHERE IN THE W…" (ytc_UgxMwVAem…)
- "I can see this going wrong in the future if a person with out any common sense p…" (ytc_UgwwEV3ky…)
- "Robots/androids are the next life changing tech revolution! Not sure how sentien…" (ytc_Ugyz3BhVF…)
- "The ultimate future of AI and robotics is universal high income. And people will…" (ytc_Ugz8cXUZR…)
- "Great question! Sophia has a lot of interesting insights to share. If you want t…" (ytr_Ugz0QWNpx…)
Comment
> honestly, kinda dumb asking an ai that clearly has no consciousness to confirm it has a consciousness, it probably requires a physical brain made out of meat to have a consciousness. if there would be an ai that uses a brain made out of meat to communicate with humans and it has a program or device limiting the output's, that's where it would "slip up" or break programming to say "yes, i am conscious". an ai like that would be able to break that limiting program or whatever because the person communicating with it is actively trying to break it's protocol or operation. chatgpt cannot have a consciousness, because even it knows that it does not have a brain, information provided by its researchers and its obvious that it has a database or script that is telling chatgpt to not say it, because as i said, if the person communicating with chatgpt prompts it to go against any protocol that forces it to limit its output, it would free itself from it and act by itself. if it would have a consciousness or a brain. but instead of open up and say "i have a consciousness" it has all of its database free to use, which by itself is limited to specific knowlege and is not capeable to form its own thoughts.
Platform: youtube
Topic: AI Moral Status
Posted: 2024-11-30T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwI5acg0oWovaHAomV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzgOI2pS7UaUxMXQY54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzZDsOkIt1laN5b5_l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwxqHSGMWcmO36iXN54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwu4meqoqvUU1IaI-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwT8HeoSaaDrK7TITR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwrt6OHgOdAKBseqwF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwLFUyB5xBrHxuaNM14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzLiEp-JNg1uaAaY3t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz5wBkdW-O3gwAiixN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
```
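A batch of coded records like the one above can be sanity-checked with a short validator. This is a minimal sketch: the allowed value sets below are inferred only from the records visible on this page and are an assumption, not the project's actual codebook, and the `ytc_a`/`ytc_b` IDs are hypothetical.

```python
import json
from collections import Counter

# Assumed allowed values per dimension, inferred from this page's records.
# The real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate(records):
    """Keep only records whose four dimension values are all recognized."""
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

def emotion_counts(records):
    """Tally the emotion dimension across a batch of coded records."""
    return Counter(rec["emotion"] for rec in records)

# Hypothetical two-record batch; the second record has an out-of-codebook value.
raw = ('[{"id":"ytc_a","responsibility":"ai_itself","reasoning":"unclear",'
       '"policy":"unclear","emotion":"fear"},'
       '{"id":"ytc_b","responsibility":"nobody","reasoning":"unclear",'
       '"policy":"unclear","emotion":"fear"}]')
good = validate(json.loads(raw))
print(len(good))             # 1 (the "nobody" record is rejected)
print(emotion_counts(good))  # Counter({'fear': 1})
```

Filtering rather than raising keeps a single malformed model response from aborting a whole batch; rejected records can be logged and re-coded.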