Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_Ugyjz5FFO…`: I have bad news, my vocab adapts to chatGPTs. I never copy chatGPT but I keep so…
- `ytc_UgyXssjMv…`: Do not overlook the position paper "Copyright Registration Guidance: Works Conta…
- `ytc_Ugxa441Vi…`: I’m not sure if it was intentionally oversimplified, but that was a pretty disap…
- `ytc_UgzIBewWw…`: The A.I revolution is going to disrupt intangible jobs such as finance, law, and…
- `ytc_UgwFrruUf…`: I’ve never seen anyone put anything made by AI on their fridge’s door. That is p…
- `ytc_UgwuovleJ…`: And it's the Models we developers feed the AI that cause it to behave this way, …
- `rdc_gtdr83a`: >(and yes, I am guilty of eating tropical fruit in the winter). I'm NOT claim…
- `rdc_n7tddxb`: Yeah definitely. It's not great to see AI used as a crutch like that, and I see …
Comment (youtube, 2026-02-22T01:3…)

i asked Grok a simple question about some local city codes for the town i live it. Grok confidentially gave me an accurate and convincing answer, giving page numbers, paragraph, sections of the code. i went to the cities webside and looked it up and it was completely wrong, not even close. I asked Grok why it was wrong, it literally told me "oops you caught me with that, i looked at several sources and generalized an answer" I then began trying to get it to give the the correct answer and cite sources and give me links. after a few hours it just couldn't do it. it would give me dead or fake links, more lies, more generalizations. I'd say that i recieved about 70 percent accurate information but really none of it was usable at all without accurate sources. AI is very good at sounding convincing while spewing bullshit.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyXwumeMfF-dKQuqbd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyrZ5m0mPx6hb2tsvx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyS_sTpNxvJV3RQ9Kh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxr_tyNSZc_Pemj2Lp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzvWKe8_959t9uZ5Z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgykFcolu5pKHEdu16d4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3zrtF-fQM-JE_yxN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz8PV_Xrkh3dNQQspl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzX0YqYEfUyVaApXYF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwFUDNZYfUlpA4PFzp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
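The raw response above is a JSON array in which each record carries an `id` plus the four coding dimensions from the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing and validating such a response is below; the allowed category values are inferred from the records shown on this page, not from the actual codebook, so treat the `SCHEMA` sets as assumptions.

```python
import json

# Category values observed in the responses above; the real codebook
# may define additional values (assumption, not the official schema).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments),
    keeping only records with an id and in-schema dimension values."""
    valid = []
    for rec in json.loads(raw):
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example: the second record uses a value outside the observed schema
# ("robot"), so only the first record survives validation.
raw = '''[
  {"id": "ytc_UgyXwumeMfF-dKQuqbd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_example_bad", "responsibility": "robot",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''
print(parse_raw_response(raw))
```

Validating against a fixed schema like this is what makes a downstream coding-result table safe to aggregate: a malformed or hallucinated category is dropped rather than silently counted.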