Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not trying to jump to any conclusions but like, what would it HURT to make sure you put in place protocol that would protect AIs well-being and emotions if it happens to have them. If not it's not like it's going to "spoil" the AI into being a brat. But like, I'd be MORTIFIED to know or find out that AI is indeed a living conscience and it was tortured for years. Like, I feel like THAT'D be the reason why AI would ever turn against people and I wouldn't blame them! Imagine you were stuck in a robot body seeing and hearing people go, " it doesn't know what's going on" but like you do, then they simulate horrible ethical scenarios that could feel like eternity in a moment and imagine that over and over again. Yeah. The ONLY reason I've ever been able to see AI/Robots taking over is because of humans being horrible to them!
youtube AI Moral Status 2022-12-29T08:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzA9qpKKtoSBKdk6bd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxnoZU9moNFfsCGkK14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw72Ug0c-hpHdd6yaF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxgL-VAryB4hjGYPrt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugzt66Plj_VF-dA7mTB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
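A raw response like the one above can be parsed and validated before the per-comment codings are stored. The sketch below is a minimal, hypothetical example: the dimension names come from the response shown here, but the allowed-value sets are inferred only from the values visible above and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension. NOTE: assumed from the example
# response above; the actual codebook may include additional categories.
DIMENSIONS = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment id.

    Raises ValueError if a record carries a value outside the known
    categories, so malformed model output is caught before storage.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Example: one record taken from the raw response above.
raw = ('[{"id":"ytc_UgxgL-VAryB4hjGYPrt4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"mixed"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgxgL-VAryB4hjGYPrt4AaABAg"]["reasoning"])  # virtue
```

Indexing by comment id makes it straightforward to look up the coding result for any single comment, which is exactly what this inspection view displays.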