Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgwUAzgXX…`: "Most of you saud gpt 4/4.1 were good, NAH, THEY WERE NOT, they were the definiti…"
- `ytc_UgwiibAZj…`: "I don’t like the idea of using AI that copied other art styles. But at the same …"
- `ytc_Ugz-ugW_o…`: "my art was in the trenches back in 2021, but that was my baby steps and i learne…"
- `ytr_UgwDVn0ZW…`: "@alexgamble4718 thats the stupidest thing to do then. it will achieve nothing…"
- `ytc_UgwvVlfvZ…`: "MIT study shows 95% of companies that have employed AI are no more productive…. …"
- `ytc_Ugy_q4s_s…`: "fence rider on the hard questions whats she scared of. be the robot you were bor…"
- `ytc_UgyusyRSI…`: "I believe, at first, it will all depend on who programs/trains the AI. If differ…"
- `rdc_hp0nrvm`: "Honestly, if you can, try to Google what your position would get salary and bene…"
Comment
Saying this with the understanding that we really don't know how close we could be to creating a superintelligence without understanding how consciousness actually works, I nevertheless don't think we are even nearing that reality.

LLMs do not do anything close to thinking; it's actually a lot closer to remembering with weighted results. They don't reason, and they don't even understand what they are outputting, only that output like this is what they've been "trained" to respond with.

Having said that, there are numerous real-world problems happening now with AI and how we use it that are critically important:

- What effect does dependency on AI have on society, in a way similar to the poorly understood effects of social media algorithms?
- Unreliable output/hallucination that is confidently reproduced.
- Copyright and intellectual property used by companies to build AI models.
- The effects of training future models on the output of past models.
- AI psychosis, which is already a significant issue even at this early stage of wide LLM usage.
- Companies attempting to replace employees with LLMs, much too early for the current state of LLMs, but corporate structure doesn't reward long-term planning.
- Ethical issues of codifying our current cultural values, bad and good, into models' training data.

There are endless issues we need to deal with in this technology long before the fabled singularity event is a factor we have to worry about.
youtube · AI Moral Status · 2025-10-31T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
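Each coded comment is scored on four categorical dimensions. A minimal validation sketch follows; the allowed value sets below are inferred only from the codings visible in this section, not from an exhaustive codebook, so they are assumptions:

```python
# Categorical coding schema sketch. Value sets are inferred from the
# codings shown on this page, NOT an authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "approval",
                "indifference", "mixed"},
}

def invalid_fields(coding: dict) -> list:
    """Return the dimension names whose value falls outside the inferred sets."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding result from the table above passes this check.
example = {"responsibility": "developer", "reasoning": "consequentialist",
           "policy": "none", "emotion": "resignation"}
print(invalid_fields(example))  # []
```

A check like this is useful because LLM coders occasionally emit off-schema labels; flagging them early is cheaper than discovering them during analysis.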
Raw LLM Response
```json
[
  {"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
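The raw response is a JSON array of per-comment codings, so "look up by comment ID" amounts to indexing the array by its `id` field. A minimal sketch (the inline `raw` string is truncated to two entries from the response above for brevity):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as shown above
# (truncated here to two of the ten entries).
raw = '''
[
  {"id": "ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the raw response and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw)
coding = codings["ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg"]
print(coding["policy"])  # "regulate"
```

Indexing once into a dict makes repeated ID lookups O(1), which matters when inspecting many comments against a large batch response.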