Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think LLMs, by themselves, are really useful tools. Just tools. I can use a rock to pound a nail; a hammer is quantifiably better at it. Having said that, the economic model alone is troubling. Sora 2? Great! I know they're trying to get us "into it". But yeah. It should cost five bucks to make it do your Dr. King rapping video. Yes, you'll lock out some creativity, but you'll gate stupid usage. That's what will happen after the bubble bursts. AI is still going to be around; it's just going to start costing the user more per use. I believe the developers are trying to bubble this. They're trying to get as far as possible, as fast as possible, before this all falls back. So the genie is out of the bottle and folks won't be willing to put it back in. Like the web. Most people don't want to fully go back to the 80s. It seems nice until you try to get along that way while the rest of the world doesn't. It puts you at a disadvantage. So yeah. Ok, old guy rant completed.

I experimented with ChatGPT and humor. Yes, I know I said I was done. I had it tell me made-up jokes. They followed a theme of productivity, and seemed like a person who has heard of humor but doesn't really know how it works, trying to be funny. I had it write 10 jokes at a time and told it which jokes were better. It adjusted to that input and was eventually giving me made-up jokes that were in line with what I preferred. Then I asked it to give me a "canned" joke. It gave a quite funny joke. Better laugh than any of the made-up ones got from me. So yeah. It facsimiles well, but it isn't there yet. Still, it's fascinating how it tuned to what I liked and tweaked those. It learns really well.
youtube AI Moral Status 2025-11-06T22:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwFb01OemNswuAdwJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwt5r2i4PRp7uvA6PZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyOrVHMiZT-K7L_B6F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwlCCzierETtNFopoV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzUdFCYI8hxcEaaFvR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzyHvKmyoP0JUNrBwt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrVWjVETGLOxd2VHp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyQtzKrpYkd5DBgS594AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzUJJAgB2QOyZNlue94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwKlu-FMlVmhsJf_6F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
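A raw response like the one above has to be parsed and sanity-checked before the codes land in the results table. The sketch below is a minimal, hypothetical validator: the `validate_codings` function and the `ALLOWED` vocabularies are not part of this pipeline, and the allowed label sets are inferred only from the values visible in this sample, not from an official codebook.

```python
import json

# Label vocabularies per coding dimension. These sets are an assumption,
# reconstructed from the labels seen in the sample response above.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and keep
    only rows whose labels fall inside the allowed vocabularies."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: the row that produced the Coding Result table above.
raw = ('[{"id":"ytc_UgwlCCzierETtNFopoV4AaABAg",'
      '"responsibility":"company","reasoning":"consequentialist",'
      '"policy":"liability","emotion":"approval"}]')
print(len(validate_codings(raw)))  # → 1
```

Rows with out-of-vocabulary labels are dropped rather than repaired, which keeps the downstream table clean at the cost of losing a few codings; re-prompting the model for the dropped ids would be the obvious follow-up.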