Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is so stupid. When a videogame character says things a normal human would, when it says that it's afraid of death, or when it makes jokes and so on it doesn't mean it's sentient. It means it was programmed to say those things. Same thing with AI, but because the process of programming it is a bit more complex all you idiots just assume it's fucking magic and it can make sentience pop into existence. LaMDA is just an app that spits out taxt based on probability distributions over word sequences. It's no more sentient than a 'hello world' program.
youtube AI Moral Status 2022-12-30T10:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzA9qpKKtoSBKdk6bd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxnoZU9moNFfsCGkK14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw72Ug0c-hpHdd6yaF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxgL-VAryB4hjGYPrt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugzt66Plj_VF-dA7mTB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
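The raw response batches several comments into one JSON array, so recovering a single comment's coding means indexing the array by `id`. A minimal sketch, assuming only the list-of-objects shape shown above (the two entries below are copied verbatim from the raw response; the variable names are illustrative):

```python
import json

# Raw batched LLM response: one object per coded comment, keyed by "id".
# Truncated here to the first two entries from the response above.
raw = (
    '[{"id":"ytc_UgzA9qpKKtoSBKdk6bd4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"},'
    '{"id":"ytc_UgxnoZU9moNFfsCGkK14AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
)

# Build an id -> coding lookup so any comment's dimensions can be inspected.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# The coding for the comment displayed above.
coded = codings["ytc_UgzA9qpKKtoSBKdk6bd4AaABAg"]
print(coded["emotion"])  # -> outrage
```

Keeping the lookup keyed by `id` also makes it easy to spot batching errors: any displayed comment whose id is missing from the dictionary was silently dropped by the model.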