Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- _iinstinct Here we go isheep. Most all latest smartphone come with facial recogn… (ytr_UgynuUL-B…)
- I feel the same way about AI writing, it writes at this clunky level i'd expect … (ytc_Ugx0QiRAe…)
- character ai is pretty tame compaired to what I've seen.. don't ask.. (plus cha… (ytc_UgykVEURO…)
- This is real even where I work they are not hiring replacements and they ok if w… (ytc_Ugx4TtJwG…)
- The one where the AI tries to jailbreak after being told a new model was being i… (ytc_UgwNZVZhz…)
- You see, what leaves me with a nagging doubt is that we know this is just one of… (ytc_UgySg5nQF…)
- To justify the use of AI by trying to offer up lame reasoning like if artists re… (ytc_Ugxwpk7rt…)
- Lol so what Im getting is that Ai helped create a trend. Wasn't the argument tha… (ytc_UgwhhXJko…)
Comment
disclaimer: i am not a computer scientist. BUT i feel like a lot of people in the comments here don't actually get what chatGPT is, which is largely the fault of OpenAI and other actors trying to hype the technology. Calling these models "intelligent" is generous at best. I do not believe them to be capable of consciousness.
Large language models are not simulations of intelligence, they are essentially very advanced predictive text generators-- a bit like the predictive text above the keyboard on your phone, except they use insanely large datasets of human-written text to generate text that is much more human-like. While the predictive text algorithm will look at the last word or two that you've typed to predict what word might come next, LLMs might look at something like the last hundred words to predict what comes next. So, while they can often generate text that is clearly comprehensible, they do not 'know' anything and they do not 'think'. This is why algorithms like chatGPT often give results that are full of circular reasoning, nonsense or outright falsehoods. The program does not understand the meanings of words. It just puts words into a particular order based on probabilities calculated from its huge amounts of training data, although the creators constantly tweak the algorithm and hardcode certain outputs to make results look better.
(Idk if it's the case with chatGPT specifically, but many chatbots also have humans monitoring and editing their responses live.)
So while large language models might be at least superficially impressive, I think it's safe to say that they are neither currently conscious nor capable of consciousness. I don't know if GAI or machine consciousness is possible, but I feel pretty certain that it will not emerge from LLMs.
Source: youtube · AI Moral Status · 2025-01-31T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtwUoO5i0Nv2AyeJl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzgEUYc_F32iH5Szcp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzOYvEVqLj3GwjrDw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyqs8I0HaX4ru_zy-l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyi-wVXMRqHQFjcjL54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6Oug94VXmVEgPyit4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx-otWDf02ltySOUR94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgygOswspSSPMLIhLLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_hkIF6icCDaLx1eN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWLg7M_p00ZsLSgsx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
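The raw response above is a JSON array of per-comment codings, which makes the "look up by comment ID" step straightforward: parse the array and index it by `id`. A minimal sketch in Python, assuming the field names shown in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and using two of the IDs from the array above; the full array would be handled the same way:

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Only two rows from the array above are reproduced here for brevity.
raw_response = """[
  {"id": "ytc_Ugx-otWDf02ltySOUR94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzOYvEVqLj3GwjrDw54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

# Index the codings by comment ID so any coded comment can be
# inspected directly, as the dashboard's lookup does.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_Ugx-otWDf02ltySOUR94AaABAg"]
print(row["policy"])   # -> regulate
print(row["emotion"])  # -> mixed
```

The dictionary built this way is also what backs a "Coding Result" view: each looked-up row carries exactly the four dimension values shown in the table above.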