Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First, AI is supposed to mimic human. The original meaning of AI is to mimic human intelligence, rather than to be "super smart", so if the AI respond to you they don't want to be turned off, it is oh so simple to do and supposedly what AI is for.

Second, in the 80's, while in high school, I can already write some simple chat program that let the computer reply: I don't want to be turned off when asked, "is there something that worries you?" It is plain and simple and should not "shock" any person
YouTube · AI Moral Status · 2022-12-29T05:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzA9qpKKtoSBKdk6bd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxnoZU9moNFfsCGkK14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw72Ug0c-hpHdd6yaF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxgL-VAryB4hjGYPrt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugzt66Plj_VF-dA7mTB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
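The raw response is a JSON array of per-comment codes; the per-dimension table above is presumably derived by looking up the matching entry by comment id. A minimal sketch of that lookup, assuming the four dimension keys shown above and using two entries from the actual raw response (the helper name `index_codes` is hypothetical, not part of the tool):

```python
import json

# Two entries copied from the raw LLM response above.
RAW = (
    '[{"id":"ytc_UgzA9qpKKtoSBKdk6bd4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"},'
    '{"id":"ytc_Ugzt66Plj_VF-dA7mTB4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)

# Assumed coding dimensions, matching the table columns.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the raw response and map comment id -> coded dimensions,
    skipping any entry that is missing a required dimension."""
    codes = {}
    for entry in json.loads(raw):
        if all(dim in entry for dim in DIMENSIONS):
            codes[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codes

codes = index_codes(RAW)
print(codes["ytc_Ugzt66Plj_VF-dA7mTB4AaABAg"]["emotion"])  # indifference
```

Validating each entry before indexing is a cheap guard, since LLM output is not guaranteed to conform to the requested schema.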