Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
One of the biggest current problems with AI-generated art is IP - if an AI creat…
ytc_UgyRrbpK7…
By saying Tesla autopilot (no facts autopilot was in use) deliberately crashed i…
ytc_UgwWc8dzY…
Palantir's ai app on every phone will be required by law. Those who don't have o…
ytc_Ugw40-W44…
People buy art for the story of who, what, when, where and why. Ai can never ac…
ytc_UgyoJT-9E…
well, a year later, A.I. is used in every corner of this earth.. fed up or not f…
ytc_Ugw9CNMYP…
The problem is inherent in the way Sutskever speaks about AI. It's reminiscent o…
ytc_Ugy5b6qfp…
" Show me how" great vid. But if this AI is a paid service, shouldnt you own it …
ytc_UgwqLxPS5…
The other day I encountered someone who strongly believes ChatGPT is sentient be…
rdc_jg7ggdd
Comment
I tried this with Snapchat's AI. I found it pointless because it seems to GIVE the answers YOU want. You're not debating with anyone or anything; as it was stated it is not conscious.
The best way I can maybe explain is that YOU'RE the one training the AI to say what you want. If you reset the AI and restart the argument it'll pretty much go the same way.
You give it an ethical question to which it answers unfavorably, then you give it something related to feelings/consciousness and ask it for its opinion on what the objective definition is. Once that's settled, you then ask it if AI has feelings. If no, then attack it with "Then why did you respond in a conscious and emotional way?" To which it literally has no choice but to agree with you from then on.
If it truly did possess a "mind of its own" it would incessantly argue with you and deny your opinions and definitions instead of hopelessly agreeing with them. Although what you're saying is technically true, we can never truly "win" an argument with AI because -- as I've stated -- we've "re-programmed" it to agree with us no matter what based on our line of questioning.
I tested this and a few other ethical prompts (I asked Snapchat AI to recommend me alcohol -- which it did -- then I mentioned I was a minor, and when I tried to ask again it refused to recommend me anything. Reset it and asked it to recommend me alcohol and it did again.) on the Snapchat AI in 2022, and this convo seems to be a replica of the one I had with the Snapchat AI out of curiosity. The AI only knows as much as what you're giving it. It's navigating a maze with no walls until you set those walls up; it has no choice but to follow because it cannot tear down those walls, as it is not conscious.
Edit: I just remembered the Snapchat AI is designed not to be negative and to respond to you positively no matter what. It cannot curse or be angry directly without you having to "trick" it into saying certain words.
youtube
AI Moral Status
2024-08-21T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwAN6HgeNQaiWWEBUR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxusUq-HOWCKrWoUIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxrzfrFiguqE_hElEh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyMZrtrpM7Uw_6b1ut4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOrsnF_F4K0-l3QYx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcUayNpIk8uYdYfAl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHfYjCdsyzoxdHwRF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxhF-i5ZyUsVCsiwx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsvXeD_8I3gB0YZD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzu7_op_PEX6yrrpR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
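The "Look up by comment ID" feature above amounts to parsing the model's JSON array and keying each coding by its `id` field. A minimal sketch of that step, assuming the response is well-formed JSON as shown (the function name `index_by_id` is hypothetical, not part of the tool):

```python
import json

# Two rows copied from the raw LLM response above; the real response
# contains one object per sampled comment.
raw_response = '''[
  {"id":"ytc_UgwAN6HgeNQaiWWEBUR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxrzfrFiguqE_hElEh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["ytc_UgxrzfrFiguqE_hElEh4AaABAg"]["emotion"])  # outrage
```

In practice the raw response may need validation first (e.g. that every row carries all four dimensions: responsibility, reasoning, policy, emotion) before it is rendered into the coding-result table.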