Raw LLM Responses
Inspect the exact model output for any coded comment, or look a record up directly by its comment ID.
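The ID lookup can be sketched as a small helper. This is a hypothetical illustration, not the tool's actual code: the record shape follows the raw LLM response format shown later in this section, and prefix matching is assumed because the displayed IDs are truncated.

```python
import json

def lookup_by_comment_id(raw_response: str, comment_id: str):
    """Return the first coded record whose id starts with the given
    (possibly truncated) comment ID, or None if no record matches."""
    records = json.loads(raw_response)
    for rec in records:
        if rec["id"].startswith(comment_id):
            return rec
    return None

# Hypothetical single-record response in the same shape as the raw output below.
raw = ('[{"id": "ytc_Ugz_example", "responsibility": "company", '
       '"reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}]')
print(lookup_by_comment_id(raw, "ytc_Ugz"))
```

Because IDs shown in the UI end in an ellipsis, matching on the visible prefix is the only practical lookup key here.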
Random samples (truncated comment previews with their comment IDs):

- "This is so stupid they TOLD the AI to do these things so it did. Dumb fake outra…" (ytc_UgxPesWh7…)
- "There was a reason why Google was holding in so much if their AI systems until .…" (ytc_Ugw0-ts6u…)
- "Why not shut it down, now if we know only few of us will get a job if we let it …" (ytc_UgwPw2-Vx…)
- "You do realize AI is an industry? There is supporting infrastructure. Look at op…" (ytr_UgzLWd5ZP…)
- "Here's an uncomfortable question, how does A.I art that uses copyrighted materia…" (ytc_UgwYJIEY3…)
- "Well your video looks like Ai slop if you have 70 workers... they need some grap…" (ytc_UgxnU_kSb…)
- "AI is the nephulm of rhe tech age..God still Rules and if we corrupt the Makers …" (ytc_UgzsE06ok…)
- "AI is very very bad for Human race. NI, natural inelegance is far Superior! #S…" (ytc_Ugzz-Gsut…)
Comment
Another few, because I anticipate one direction the conversation _may_ take:
The alternative is to demand that LLMs be made unavailable to the public for the foreseeable future -- or at least until they can absolutely be made properly safe and reliable, or someone somehow achieves omniscience & infallibility. Of course, that means not only shutting down OpenAI, Grok/XAI, Google's Gemini platform, Anthropic, Mistral, and tons of other platforms, but also means effectively halting public research, which is thus counter-productive to trying to create safe & reliable AI as is demanded. Either way, the only 'answer' available is making the technology unavailable, and there's no way that will ever happen.
Just as you cannot put the technology of computers, the tape deck, the Internet, MP3s, social media, or any of several hundred other concepts back into the proverbial box, you can also not do the same to LLMs. The technology exists, it is known, it is understood, it can be and has been replicated, and there is no possible way to live in a world now where that is not the case. If the technology is not being used here, it will be used elsewhere, and the many hundreds of billions of dollars of economic churn (whether for good or for bad) that's tied up in the technology will end up elsewhere as well.
No first-world government is going to let that happen... ever. We can't even ban nuclear weapons -- a technology which cannot be used on this planet for safe, 'good', or humanitarian purposes -- so trying to ban AI is even more impossible.
youtube · AI Harm Incident · 2025-11-08T21:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgyF5_r0ndin4jXQIgB4AaABAg.APGM7pbqVF7APKbYPzR6HD","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APGFIKlDVwI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APGX6VwTQuf","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHMCvEl3t_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHMhk3Ub7o","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHQgFwvH9i","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwXuauikr3KXoQo5uV4AaABAg.APFtP3PSEeHAPFzbJNXceK","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgwtA8H15TjXn-7uoXl4AaABAg.APFt4MX_iM6APH9o3MHUTx","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgyJZCjEp0ZPiz0e9fx4AaABAg.APFmBPrDdsYAPK34z5569K","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgyJZCjEp0ZPiz0e9fx4AaABAg.APFmBPrDdsYAPMIqpYjEUl","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
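A batch response like the one above can be sanity-checked before the records are merged into coding-result tables. The sketch below is an assumption, not the tool's pipeline: the allowed values are inferred only from the records visible in this section, and the real codebook may contain additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the records
# in this section (assumption: the real codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval"},
}

def validate_batch(raw_response: str):
    """Parse a raw LLM batch response and return a list of
    (record id, dimension, value) tuples for out-of-codebook values."""
    errors = []
    for rec in json.loads(raw_response):
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append((rec.get("id"), dim, value))
    return errors

# Hypothetical one-record batch; an empty list means every value is valid.
sample = ('[{"id": "ytr_x", "responsibility": "company", '
          '"reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}]')
print(validate_batch(sample))
```

Running a check like this catches the common failure mode of LLM coders: syntactically valid JSON whose labels drift outside the codebook.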