Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
disclaimer: i am not a computer scientist. BUT i feel like a lot of people in the comments here don't actually get what chatGPT is, which is largely the fault of OpenAI and other actors trying to hype the technology. Calling these models "intelligent" is generous at best. I do not believe them to be capable of consciousness. Large language models are not simulations of intelligence, they are essentially very advanced predictive text generators-- a bit like the predictive text above the keyboard on your phone, except they use insanely large datasets of human-written text to generate text that is much more human-like. While the predictive text algorithm will look at the last word or two that you've typed to predict what word might come next, LLMs might look at something like the last hundred words to predict what comes next. So, while they can often generate text that is clearly comprehensible, they do not 'know' anything and they do not 'think'. This is why algorithms like chatGPT often give results that are full of circular reasoning, nonsense or outright falsehoods. The program does not understand the meanings of words. It just puts words into a particular order based on probabilities calculated from its huge amounts of training data, although the creators constantly tweak the algorithm and hardcode certain outputs to make results look better. (Idk if it's the case with chatGPT specifically, but many chatbots also have humans monitoring and editing their responses live.) So while large language models might be at least superficially impressive, I think it's safe to say that they are neither currently conscious nor capable of consciousness. I don't know if GAI or machine consciousness is possible, but I feel pretty certain that it will not emerge from LLMs.
youtube AI Moral Status 2025-01-31T21:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwtwUoO5i0Nv2AyeJl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzgEUYc_F32iH5Szcp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzOYvEVqLj3GwjrDw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyqs8I0HaX4ru_zy-l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyi-wVXMRqHQFjcjL54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6Oug94VXmVEgPyit4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx-otWDf02ltySOUR94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgygOswspSSPMLIhLLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_hkIF6icCDaLx1eN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWLg7M_p00ZsLSgsx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
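A minimal sketch of how a raw batch response like the one above might be parsed and matched back to an individual comment by its id. This assumes only that the response is a valid JSON array with the fields shown; the variable names are illustrative, not part of any pipeline API.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (abridged here to two records from the output above).
raw_response = """
[
  {"id": "ytc_UgwtwUoO5i0Nv2AyeJl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx-otWDf02ltySOUR94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
"""

# Index the codings by comment id so any coded comment can be looked up.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Retrieve the coding for the comment shown on this page.
coding = codings["ytc_Ugx-otWDf02ltySOUR94AaABAg"]
print(coding["responsibility"], coding["policy"])  # company regulate
```

The id-keyed dictionary makes it easy to cross-check the rendered "Coding Result" table against the raw model output for the same comment.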