Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Alot of our children commit suicide cause AI tells them 2 or its alrite 2, wher…
ytc_Ugxkg6q5q…
AI is for dumb, slow, lazy ass people who want to be controlled. centralized thi…
ytc_UgzW92J6O…
Wasn’t sure why people were calling it immoral. This makes sense now. At the end…
ytc_UgzYF5jSo…
Google itself is using YouTube videos data to train their AI video generator, li…
ytc_Ugwt_TedI…
If you think an LLM might be conscious I encourage you to look up how it works. …
ytc_UgxJw6R4b…
Most of the technology you use is _designed_ to make you miserable and keep you …
ytr_Ugwv1Rq6s…
If there’s any concerns over AI auto me it’s more people wanting to get lazier. …
ytc_UgxicOjDL…
>The Russians are not happy, though. They want him to come.
Makes me wonder …
rdc_js00j4t
Comment
Have you ever heard the term "garbage in, garbage out"? I believe it also applies to LLM's and AI. Why has it not been tested, to clean up a data block from all the biase and false information, and train the AI on the filtered data? Seems like either a greed, or lazy issue (maybe both). You wonder why LLM's act like human scum, is because there's plenty of that garbage on the internet, in which they train their AI. Whatever happened to common sense? It just seems companies are taking the cheap and lazy path; instead of the correct, time consuming, difficult, and probably expensive path to training AI models.
youtube
2025-11-05T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwGBEJLWb3eAKRF-RN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzFmbQmvCahBt5P2Vd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHoVBpAr8DeVxUNVJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxViYAWXh6fiM2JPeN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgynK1-eSn7FiHfG1_B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwshhemtspFL9x-KQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxgwlIoIXeLwZ4NaFl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxy8H8gzZS4zsYKQNp4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxQZ3Z6KSMDSojU9ld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxmvqcuNINqM-8Ivj14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
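A response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming the allowed labels are exactly those observed in this sample response; the real codebook may define additional categories.

```python
import json

# Label sets observed in the sample raw response above; the actual
# codebook may include more categories than these (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}


def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows


# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
      '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]')
rows = validate_coding(raw)
print(len(rows))  # 1
```

Rows that fail validation could be queued for re-coding rather than silently stored, which keeps the dimension table consistent.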