Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@joshanonline Yeah, it's touch and go on the influence part, because it ultimately depends on multiple variables: the training data, the unconscious biases that get filtered in from the people doing the training on the supervisor/data provider side of things, and then the pre-prompts they use to steer it in the direction they want; not what you want. I have a few LLMs installed on my machine right now. One of them is IBM's Granite, and the other is Meta's Llama. I've got the latter installed two different ways, one via Ollama, and the other via LMstudio, IIRC the name. There's a lot that goes into shaping them into the end version we get to play with.

Think of this. With ChatGPT, for instance, a lot of its training data apparently was literally us on Reddit and stuff like that. Seriously. My first time ever using it, I asked it to write like me, per my Reddit name; and it did it without a problem. It was flawless execution, right down to the very last improper punctuation I probably do; like that. I probably use that semicolon improperly a lot of the time, but I don't really care anymore. Seeing that get replicated in front of me, without any prior usage at that moment, was uncanny valley territory. Then I tested it further, and found it lacking in a lot of ways. Many of the rest of you out there have noticed some of those things.

The locally installed stuff is kind of superior to ChatGPT in a lot of ways, due to it being more malleable to your needs. But they all kind of suffer the same main issue of the companies being hyper careful that people can't misuse them, even if people still end up doing so anyway somehow. Yes, they work on a very if-this, then-that mindset, and it shows most when they false-positive on things that aren't quite what they seem. You can coach some of them through it, but you'd best have a large context window, or they'll forget. That context window seems to be key to them being truly useful in some things. For instance, people say they can't do math. Nah, they can. You just have to keep it to a word problem instead of a numbers problem. Remember those in your textbooks? The kind of math they start to fail at more is when things get more complicated. I've made them do full grid arrays with every number in them being ... mostly accurate. That context window again. Started off fine, but then it forgot some details.

Anyway, figured I'd share some hands-on experience here. Hope it helps in understanding the situation further. AI, like it or not, is here to stay, it seems. How we use it is what is going to be most important. That, and keeping folk from abusing it. The last part is going to be hard, because to keep the worst actors away with AI, one is going to have to become proficient in working with it on a 1:1 basis as a handler. That means being able to push test scenarios, seeing what flies, pushing the niches, seeing what sticks. I've done some of this on my local machines, and on ChatGPT before thinking wiser of it on there. I've always prefaced those with a footnote of some sort indicating that it was a test only, though, just in case.

As a final thing to mention: check out that base44 that's likely been popping up in your ads lately, if you aren't using anything to block that. It actually works, more or less. You have to coach it, but the apps it makes actually work. More or less. I keep saying that because I've only played around with the free trial so far. But that small amount of tokens was able to make something I could totally finish up with little knowledge of ... apparently React is the language it used for that one. Nice thing about this one is you actually own what you make with it. So... that's going to be interesting to see unfold over the coming months into years.
youtube AI Harm Incident 2025-09-21T05:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgzjckZB5hx6_7P8VS14AaABAg.ANLmPqhY2h0ANytgTDJoTu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxxVwOc5K0Y6OrK1a94AaABAg.ANHg5HFRIqYANQAS68CI3n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgwOw_CGIBtc7G0UDnB4AaABAg.ANDSkkee_gnAPeHwx8eM_P","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgwOw_CGIBtc7G0UDnB4AaABAg.ANDSkkee_gnAPf8785jX41","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgwOw_CGIBtc7G0UDnB4AaABAg.ANDSkkee_gnAQFzgT2TTFp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgwXzEZb6PWR08q0x0x4AaABAg.AN15iFSEiRsANgNZVLnt9C","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytr_UgwXzEZb6PWR08q0x0x4AaABAg.AN15iFSEiRsAPPbE8PMZ2t","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugz59SN3SGytpw-gEYl4AaABAg.AN-_-BcEy0-ANJ3Lk1_YVi","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugz59SN3SGytpw-gEYl4AaABAg.AN-_-BcEy0-ANK3QnOWCMK","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugz59SN3SGytpw-gEYl4AaABAg.AN-_-BcEy0-ANMcfYO5KZT","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]