Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I'm anti-gen ai because I still don't see a use case for it as advertised. Image… (ytc_Ugw2LkufW…)
- Oh no so many comments. You’ll definitely never see mine. I wanted to talk abo… (ytc_UgxVDildT…)
- Ok hold on. Someone explain. We make wool sneaker and advertise we you know care… (rdc_oglkep5)
- These conversations usually take place with people who have not worked on a cons… (ytc_UgyDd5As2…)
- Using A.I for anything is bad and causes your brain to "rust out". Not using A.… (ytc_Ugz-fIXWx…)
- I missed the ai pixar meme, they are soo cursed and soo funny the good old day w… (ytc_Ugz0odE3G…)
- Wow those people... if AI could do my chores for me so that I could focus on doi… (ytc_UgwKMiyyw…)
- The reality is that we need to ADAPT and work WITH AI. This isn't going away it … (ytc_Ugy1gixC1…)
Comment
Much of the problem is simply that there's no 'artificial intelligence'. It's the same language model concept of the 1950s that's been souped up to the max. Even back when ELIZA was the chatbot of the day, people already became super-attached to it. Never mind today's LLM-based chatbots.
Projecting human thoughts and feelings on something that has neither is very typical of humans, and while it's not always negative, it can cause severe issues.
Personally I never use LLM chatbots, as I know that they cannot offer me anything that I cannot find myself in less time with a regular search. I would most definitely never trust anything generated by a chatbot, and will always consult multiple sources. Which is something that 'AJ' should have done here too. Even Reddit isn't that terrible compared to ChatGPT.
Source: youtube · AI Harm Incident · 2025-11-25T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzfF7Iz5ZoP-WjwqRx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzcsQ-Hq1HmQVZZ8hp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxLOg1XDYUhgTI1S1d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwCt0k1JkjNrlWwDEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxwB8YTFcnph01Wcm54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyW5hfPENnnHq5BGJl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx3DAYaS7BfGtsZHLV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEZKXEyjRRs_6c0aJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwMGB-FABJiMl_DBiR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyDu_qB_rOtd9q81SB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
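The model returns one JSON object per comment, each carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion) plus the comment ID. A minimal sketch of how such a response can be parsed, validated, and indexed by comment ID, using only the Python standard library; `parse_codings` is a hypothetical helper and not part of the actual pipeline:

```python
import json

# One row taken verbatim from the raw response above; the real payload is the
# full ten-element array.
RAW = '''
[
  {"id": "ytc_UgwCt0k1JkjNrlWwDEN4AaABAg",
   "responsibility": "user",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "resignation"}
]
'''

# Keys every coding object must carry, per the schema visible in the output.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array into a {comment_id: coding} map,
    rejecting rows that are missing any required dimension."""
    codings = {}
    for row in json.loads(raw):
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"coding {row.get('id')!r} missing keys: {missing}")
        codings[row["id"]] = {k: row[k] for k in REQUIRED_KEYS - {"id"}}
    return codings


codings = parse_codings(RAW)
print(codings["ytc_UgwCt0k1JkjNrlWwDEN4AaABAg"]["emotion"])  # resignation
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: one parse pass, then O(1) retrieval per inspected comment.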