Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
(Sorry for going on; I wonder if anyone even reads all of it =) ) One thing I think we should keep in mind is that the claims of sentience is based solely on text conversations about human culture in English. This is important because not only are they then claims about sentience, they are claims about something very similar to us. We haven't been able to carry out communication on that level of abstraction with any other species other than our selves, so basically it's more like us than say even a chimp. Lambda, GPT-3 and others are essentially auto-completes on speed. Basically it asks: based on the data on which I've been trained (which is a substantial part of the whole web), what is a good statistical outcome to follow what I was just given? It does that using statistics and methods that we know inside out, and that are modified versions of methods that are 20-40 yrs old. It doesn't come up with anything it wasn't trained for, if it "has a sense of humor" it's because there's humor in the training data, and we are very good at interpreting text the way we want to. Also, it doesn't do anything but acting as a function to input, i.e. it does not drive the conversation, or ask followups or for clarifications etc. So, again, if it wasn't clear before, to call it sentient (especially based on the conversations) is beyond stupid, and that is the reason why we shouldn't argue about it further, not that the other ethical issues are more important (which they are).
youtube AI Moral Status 2022-09-20T08:0…
Coding Result
Dimension        Value
---------------  ----------------------------
Responsibility   unclear
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgzMVB7SH3FZ1rL7ZCp4AaABAg.9gZg8hKlx619hQNZWJ8uMw", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzMVB7SH3FZ1rL7ZCp4AaABAg.9gZg8hKlx619hQ_r4Vu9e_", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgzMVB7SH3FZ1rL7ZCp4AaABAg.9gZg8hKlx619hVgyCM9dpE", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgzolWtZTtMrmO4hMLZ4AaABAg.9gA4Wl8YsJa9gBePcUiPff", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyQAbAre3ePrYl-jxp4AaABAg.9f_nnR2_ErE9ff0sSuzPgk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyujMa_Ar4Lo1WRUiJ4AaABAg.9fFb34F7D2T9fI1X0f8BrD", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxKAHhemHpH39Fj3tp4AaABAg.9fD4nyveraD9fJ82-I24as", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx-f-VK7ni3jOTqrDh4AaABAg.9f9vhXmPLXZ9mlvL0aU4dt", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugya6EgdWba_Jxpt38V4AaABAg.9f874Bp4inc9fBfASu_jhP", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugya6EgdWba_Jxpt38V4AaABAg.9f874Bp4inc9fBj1K9pK4c", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
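The coding result above for a single comment is obtained by looking up that comment's id in the raw LLM response, which is a JSON array of per-comment records. A minimal sketch of that lookup (the function name `extract_coding` and the single-record example string are illustrative, not part of the pipeline):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coding records.
raw = '''[
  {"id": "ytr_UgzolWtZTtMrmO4hMLZ4AaABAg.9gA4Wl8YsJa9gBePcUiPff",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]'''

def extract_coding(raw_response, comment_id):
    """Parse a raw LLM response (JSON array of coding records) and
    return the record matching the given comment id, or None."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

coding = extract_coding(raw, "ytr_UgzolWtZTtMrmO4hMLZ4AaABAg.9gA4Wl8YsJa9gBePcUiPff")
print(coding["emotion"])  # indifference
```

A real response may occasionally be malformed JSON, so production code would wrap `json.loads` in error handling and log unparseable outputs rather than assume the array is always valid.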