Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Don't ban AI robots ffs. My future female mate will never exist then. Loners lik…" (ytc_Ugyw5T-6n…)
- "The other issue is the environmental impacts of AI, it uses a ridiculous ammount…" (ytr_Ugy7HdtIm…)
- "The only ai danger is rusted parts. Which is danger to themselves. If ai were in…" (ytc_Ugy6EWWO6…)
- "This is why radar is important for autonomous vehicles but Musk wants to cheap o…" (ytc_UgwIGm-P-…)
- "All I'm going to say? SKILL. ISSUE. People who defend AI art might actually be c…" (ytc_Ugw0GRCNB…)
- "3 phones shared between 3 people, AI cannot track or manipulate content w 3 per…" (ytc_Ugy0LFkdt…)
- "Why does the AI needs to know the Skin Color of the person in any of these scena…" (ytc_UgymPSQmI…)
- "The only people who are going to be replaced are the ones that are failing to ad…" (ytr_Ugw7JAKOU…)
Comment

> My friend who majored philosophy in Oxford had an interesting thesis on how to tell if an AI is truly conscious. It went along the lines of letting an AI create its own language and assign meaning to words (which has been done before) and the moment the AI creates a word and definition for consciousness, it is truly conscious. Because mankind became conscious when we could define the feeling, look at the stars, and say "I'm here. I exist."

youtube · AI Moral Status · 2023-08-21T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy7FVSeUckDxDFkFlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzTDv_FtVbLnXZTsfd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwrUS9-tXujx_nEQZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxLMSwhhkz5ErsWwbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwExT8ZKqhZJ3nrwVd4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyR8PEa5Zru6x8qbWp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzACBaVcCtlB3k_4E54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQEEGjdY9BszgsCMR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw23nNp6RKYdqmZB494AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIPLrB3JirK8MOddp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
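The "Look up by comment ID" feature above amounts to parsing the raw response (a JSON array of per-comment objects) and indexing it by the `id` field. A minimal sketch of that step, assuming the response is valid JSON in exactly the shape shown; the variable names and the two-record sample payload here are illustrative, not the tool's actual code:

```python
import json

# Raw LLM coding response: a JSON array of per-comment objects.
# Field names (id, responsibility, reasoning, policy, emotion) match
# the response shown above; the payload itself is a trimmed sample.
raw_response = """
[
  {"id": "ytc_Ugy7FVSeUckDxDFkFlB4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyIPLrB3JirK8MOddp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the coded records by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coded dimensions by its ID.
record = codes["ytc_UgyIPLrB3JirK8MOddp4AaABAg"]
print(record["responsibility"], record["policy"])  # developer regulate
```

In practice the raw model output may carry markdown fences or trailing prose around the array, so a production version would strip those before calling `json.loads`.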