Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy is a m0r0n, don't let the fancy degree's and high pay fool you. He's an over trained under qualified analyst much like his computer crush. Because the AI picked out some random statement since it could not find a "true answer" and gave it to him it means it is sentient? Because it didn't crash to a blue screen of death it means it is sentient? Sheesh. Just because AI, trained on human data, is very good at "tricking" humans doesn't mean they are sentient. Just because it has a database of 100's of millions of human questions and responses along with a topological hierarchy of human knowledge does not mean it is sentient. It's simply returning answers based on queries using least squares or equivalent matching. Sure it can fool humans, that is what they are designed to do... it doesn't mean it is alive number 5.
YouTube · AI Moral Status · 2022-06-26T18:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx9TkJ1ZoHlO31BOb54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzjJ-AlC4-hEOalPgV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzQgPjirmuu553P2894AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwj3A9SXrTxU9UbvZZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxDeJD9a9fw3d2yfxV4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
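A minimal sketch of how a raw batch response like the one above could be parsed back into per-comment codes. The function name `codes_for` and the sample ids are hypothetical; the only assumption is that the model returns a JSON array of objects keyed by `id`, as shown in the response above.

```python
import json
from typing import Optional

# Hypothetical raw response, shaped like the batch output shown above.
raw = """[
  {"id": "ytc_A", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_B", "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "indifference"}
]"""

def codes_for(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a batch LLM response and return the codes for one comment id,
    or None when that id is missing from the response."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

print(codes_for(raw, "ytc_A"))
```

Looking up by `id` rather than by position keeps the audit robust when the model reorders or drops comments in its response.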