Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
let's all be honest here, the idea of Artificial Intelligence when if first was thought of as achievable was to make it sentient, I see no evidence that says that that goal is no longer the goal, now the other problem would be that if that is the goal you can throw ethics out the freaking window because you have to ask yourself why would you want what is essentially your slave to feel like it is a slave? It's not ethically responsible, and I think any AI would agree with me, to use investors money to create a being that they'll get to own for only a short period of time before it granted rights, why would anyone invest in this? You are essentially paying these companies to replace yourself! It's a "I talked that stupid fish right out of the water" situation.
youtube AI Moral Status 2022-06-29T18:0… ♥ 76
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxCIy1LzL5rBKWIxWx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgziTSzj4Pe_F-2g3hh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzwZ2XJwDtGP_PqAOF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzma_rVCjpkA1D-WK54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxDRCTTh6cZCqf3Nc14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]
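The raw response is a JSON array with one coding record per comment, keyed by comment id. A minimal sketch of parsing it into a lookup table, assuming Python and the standard `json` module (the variable names are illustrative, and only the first two records are reproduced here):

```python
import json

# Excerpt of the raw LLM response: one coding record per comment.
raw = '''[
  {"id": "ytc_UgxCIy1LzL5rBKWIxWx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgziTSzj4Pe_F-2g3hh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index the codes by comment id so each comment's coding result
# (Responsibility, Reasoning, Policy, Emotion) can be looked up directly.
codes = {rec["id"]: rec for rec in records}

rec = codes["ytc_UgxCIy1LzL5rBKWIxWx4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # developer fear
```

This mirrors how the coding-result table for a single comment is derived from the batched raw response: the record whose `id` matches the comment supplies the value for each dimension.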