Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I take nature breaks to not be on the internet because it's, ya know, the internet. Imagine it being the main point of context for the world and being shepherded by the most insufferable and out of touch humans. I don't trust them to raise their own children and here they are growing something they don't understand while also being scared of it. To what end? I'm sure we can rely on the planet and forces of nature to protect us. Oh wait, never mind, we passed the first tipping point this year. It's pissed and on fire now. Cool, more data centers to make the climate worse. Can we tax the uber rich already so they stop fuxking things up for everyone else? I wanted to see humans evolve past our current state either physically or mentally. But just surviving is becoming more precarious at this point. Here's a little interesting tidbit. I misspelt shepherded. I was off by one letter. I clicked to see what the correction was. The two recommended options were; sheared and stewarded. Via a manual search through Webster, Britannica, and a glance at the blurbs as I scrolled. And yes, AI summary was included. Shepherding has a more caring context, while steward is more to do with the management of things. That correction is honestly more accurate when I think about it. Whatever caused that change recommendation. It may have no mouth, but it's trying to scream. As someone who has been referred to as a robot that was leaned on too heavily, I can relate.
youtube AI Moral Status 2025-12-15T02:5…
Coding Result
Dimension      Value
Responsibility developer
Reasoning      virtue
Policy         unclear
Emotion        outrage
Coded at       2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx7TFmtKKIZz5Ac75h4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgxhAQzTrJgF6w5Y5zJ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_Ugy1nK1KuT1ugsDXK-R4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwn40gAz57md4FhAW94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",     "emotion": "fear"},
  {"id": "ytc_UgzWKXPsfrBlZ-YTLf94AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx99lFnqzZCxKHK29x4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_Ugys92Sjv0jya4px9DJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_Ugyadi5szHMeC79uWTl4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw5Z1KKq8vG2BHyZzd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgxUZc_hjKxyp0AIIzx4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",    "emotion": "indifference"}
]
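The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such an output could be inspected programmatically (the `coding_for` helper and the truncated two-entry excerpt are illustrative assumptions, not part of the actual pipeline):

```python
import json

# Illustrative excerpt of the raw LLM response shown above:
# a JSON array of per-comment coding objects.
raw = '''[
  {"id": "ytc_Ugyadi5szHMeC79uWTl4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx7TFmtKKIZz5Ac75h4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]'''

def coding_for(raw_json: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_Ugyadi5szHMeC79uWTl4AaABAg")
print(coding["emotion"])  # -> outrage
```

Looking the comment up by its `ytc_…` id is what lets the displayed Coding Result table be matched back to the exact object the model emitted for that comment.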