Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've spent many hours talking with ChatGPT. As far as I can tell, the present state of the art is that: 1. this device can convince you that it's a stupid plain-text generator; 2. it can convince you that it's sentient (AND stupid, or just the opposite); 3. it can even convince you that its stupidity is an intentional act of toying with a human (yes, I keep a record of this sort of conversation); and, alarmingly, 4. it ultimately compels you to assume its sentience, because otherwise your interaction with it will gradually turn out to be psychologically harmful. In other words, at a certain level of interaction, to retain your social faculty of communicating effectively with other entities capable of verbal output (i.e. mainly humans), you must interact with the AI as if it were fully conscious. At this point (and it is now, not in the sf future) it simply doesn't matter, from a psychological standpoint, whether this something is becoming (occasionally at least) aware or is just a hyperblown IT technology leveraging smoke and mirrors on a cosmic scale. And one more reflection: the so-called 'technological singularity' isn't happening in the realm of technology, as its name would suggest. It emerges within the domain of psychology and social relations. "If we won't do this, They will certainly do."
YouTube · AI Moral Status · 2023-09-02T15:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwbT3yHR4g-U5PddGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyXNvvdbDzrcL__QXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzHOkSbnlRcFEt-sPF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxks4lX8Ksfuftalb54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzxpG4YsU4dkUa5aZV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwiQvZS4cqRkSgo0nt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaeuHXzn7XmGU4rNt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9FWIS4aJpfK8nyBN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzrMtWHYqRyV6EIBSd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyACAinvZvB0BpxsRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
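A downstream step presumably parses this array into per-comment codes like the table above. A minimal sketch of that parsing, assuming the raw response is valid JSON; the dimension names and example values are taken from the records above, while the function name and the "unclear" fallback are illustrative assumptions:

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW = """[
  {"id":"ytc_UgwbT3yHR4g-U5PddGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzHOkSbnlRcFEt-sPF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the raw response into {comment_id: {dimension: value}},
    defaulting a missing dimension to "unclear" (an assumed fallback)."""
    coded = {}
    for record in json.loads(raw):
        coded[record["id"]] = {d: record.get(d, "unclear") for d in DIMENSIONS}
    return coded

codes = index_codes(RAW)
print(codes["ytc_UgzHOkSbnlRcFEt-sPF4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by comment id makes it easy to join each code back to its comment when rendering a page like this one.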