Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzEsBFGQ… — Robotaxi is cool but the fact that theyre still screwing up on public roads scar…
- ytc_Ugz11fkev… — My ChatGPT told me «You sure are a lovely person» 😳 I’m a «Pleace» and «thank yo…
- ytc_UgyhY1obr… — This is why the EU has requirements for various use cases of AI. Medicine and po…
- ytc_Ugwxs1faV… — This guy talking about he doesn't want to think about what will happen to his ch…
- ytc_Ugzg8c_4a… — I once had a job opportunity to teach AI to solve chemical engineering problems.…
- ytc_Ugx7aCA-v… — Trump knew about this data centers he said there will be more factories he knew …
- ytc_Ugwdcgdvv… — I'm not Alex's fan but I used the same logic to force chatgpt to admit that the …
- ytc_UgxlSKKpK… — Anyone who says their AI is sentient but then shows how it refers to itself as a…
Comment
AI, as it is, is bad. It can be useful, but it's not morally right. People should just accept that rather than be delusional.
The one way I use it is with my writing — not to give me something I claim is mine; rather, I throw my writing in and check for typos, for example. Heck, for my novel, to speed up the translation process, I had it offer me paragraphs already translated so that I could look them over (and catch dumb mistakes where it'd get genders mixed up, cut corners by removing sentences and ruining the flow of the scene, etc.), but it was me working on my own stuff, using a tool to quicken the process. I could also see it as a reference when commissioning an artist, to give them a rough estimate of a base: "This kind of image, but proper." For example, when I commissioned an artist for my novel's cover, I threw in several references, cropping something out of one picture, something out of another, showing particulars and the like. In that case, I could see also having an AI-made picture as your example, then sharing all the details on what you want it to be like, just to help better visualize things. At the same time, though... just as I made do without, anybody could. Even as references, as you've shown, they are misleading and don't fully help; you still need to do a lot of "this is wrong, this part needs to be different," like you would if you just took some pictures.
Besides, while making art is tough, requires effort... it's also fun. I enjoy writing. Yes, I have times where I doubt myself and can't write for weeks or months. There are times where I wonder if I'm good at all. But then there's the time I'm actually writing and it releases so much dopamine, so much more once I look at the pages I've gone through! To see my novel in my hands, printed and real, gives me such a sense of "I set a goal for myself and I reached it, even if it took me years". Giving a prompt to a machine that throws something at you? That's like going to the market and buying a piece of bread to eat. Sure, it fills you up, but... you probably can't say "WOAH SO MUCH DOPAMINE, THIS EXPERIENCE WAS SO AMAZING". It's just bread someone made and you paid to eat. It'd be more exciting if you actually baked the bread yourself.
The """silver lining""" with the Ghibli filter craze is that the company behind ChatGPT is losing shit ton of money with everyone generating those terrible pictures, so they might be unable to sustain this and it proves such algorhythms can't live forever and are much more costly than going to actual humans?
Source: youtube · Video: Viral AI Reaction · Posted: 2025-04-06T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyMFVaj2FCH1YMSmOR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqPiPftnfIovI2_G54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzxMh0OLhagVt7bVv94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzdeRGbVzjd-6ylPvd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgznzWmF8NkxR0RoCJB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwLLduHhipBlhd9S4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"disapproval"},
{"id":"ytc_Ugxo6plFFrqt5ENoVlF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxqatjR1g6gSiZeSLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxS3L43G0IzXBDUoe14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzufjtTkhEkfxOBAlx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
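The lookup-by-comment-ID step described above can be sketched in a few lines: parse the model's raw JSON array and index each coding entry by its `id` field. This is a minimal illustration, not the tool's actual implementation; the field names are taken from the raw response shown, the helper name `index_by_comment_id` is invented, and the sample below is truncated to two entries.

```python
import json

# Two entries copied from the raw LLM response above (illustrative subset).
raw_response = """
[
  {"id":"ytc_UgyMFVaj2FCH1YMSmOR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwLLduHhipBlhd9S4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"disapproval"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model's JSON array and index each coding by its comment ID."""
    codings = json.loads(response_text)
    return {entry["id"]: entry for entry in codings}

lookup = index_by_comment_id(raw_response)
coding = lookup["ytc_UgwLLduHhipBlhd9S4R4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # disapproval
```

The second printed entry matches the "Coding Result" table above (policy: liability, emotion would differ only because the table also records a coding timestamp, which the raw response omits).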