Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Another few, because I anticipate one direction the conversation _may_ take: The alternative is to demand that LLMs be made unavailable to the public for the foreseeable future -- or at least until they can be made properly safe and reliable, or someone somehow achieves omniscience and infallibility. Of course, that means not only shutting down OpenAI, Grok/xAI, Google's Gemini platform, Anthropic, Mistral, and tons of other platforms, but also effectively halting public research, which is counter-productive to the very goal of creating safe and reliable AI. Either way, the only 'answer' available is making the technology unavailable, and there's no way that will ever happen.

Just as you cannot put the technology of computers, the tape deck, the Internet, MP3s, social media, or any of several hundred other concepts back into the proverbial box, you also cannot do the same to LLMs. The technology exists, it is known, it is understood, it can be and has been replicated, and there is no possible way to live in a world now where that is not the case.

If the technology is not being used here, it will be used elsewhere, and the many hundreds of billions of dollars of economic churn (whether for good or for bad) that's tied up in the technology will end up elsewhere as well. No first-world government is going to let that happen... ever. We can't even ban nuclear weapons -- a technology which cannot be used on this planet for safe, 'good', or humanitarian purposes -- so trying to ban AI is even more impossible.
Source: youtube · Incident: AI Harm Incident · 2025-11-08T21:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
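The four coding dimensions map naturally onto a small record type. Below is a minimal Python sketch of that schema, assuming the label sets are exactly the ones visible in the raw response further down; the type names and the codebook itself are inferred for illustration, not confirmed by the tool.

from dataclasses import dataclass
from typing import Literal

# Label sets inferred from the raw LLM response below; the actual
# codebook may permit additional values.
Responsibility = Literal["none", "developer", "company", "ai_itself"]
Reasoning = Literal["virtue", "consequentialist", "deontological", "unclear"]
Policy = Literal["none", "regulate", "liability", "ban"]
Emotion = Literal["indifference", "mixed", "outrage", "fear", "approval"]

@dataclass
class CodingResult:
    id: str                          # comment ID, e.g. "ytr_..."
    responsibility: Responsibility   # who is blamed for the harm
    reasoning: Reasoning             # moral-reasoning style of the comment
    policy: Policy                   # policy response the comment endorses
    emotion: Emotion                 # dominant emotional tone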
Raw LLM Response
[ {"id":"ytr_UgyF5_r0ndin4jXQIgB4AaABAg.APGM7pbqVF7APKbYPzR6HD","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APGFIKlDVwI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APGX6VwTQuf","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHMCvEl3t_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHMhk3Ub7o","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHQgFwvH9i","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytr_UgwXuauikr3KXoQo5uV4AaABAg.APFtP3PSEeHAPFzbJNXceK","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytr_UgwtA8H15TjXn-7uoXl4AaABAg.APFt4MX_iM6APH9o3MHUTx","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytr_UgyJZCjEp0ZPiz0e9fx4AaABAg.APFmBPrDdsYAPK34z5569K","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytr_UgyJZCjEp0ZPiz0e9fx4AaABAg.APFmBPrDdsYAPMIqpYjEUl","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"} ]