Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Excellent interview - I like that, the interviewee didn’t over complicate his r…" (ytc_UgyE5wX3c…)
- "Obviously not, and you are making a straw man. We accept as a society that mach…" (ytr_Ugw22W07B…)
- "Ai always has this weird filter, I find it hard to describe but I always see it …" (ytc_UgzFm89fH…)
- "I'm an Infrastructure Engineer- and i worked for a company where an h1-b got hir…" (ytc_Ugwe4vN5F…)
- "MAKE the AI companies pay taxes from all customers coming from each given countr…" (ytc_UgyC2A-Ca…)
- "Dude, some of those young guys would honestly be better off with robots. Especia…" (rdc_lzazswp)
- "Andrew Yang 🤝 Bernie Sanders - The spoils of automation should benefit humanit…" (ytc_Ugzl8ZuFm…)
- "That post sounds like it was written by AI and that whoever generated it didn’t …" (ytc_Ugwqz65j0…)
Comment
> I'm sorry, but the idea that "I don't know" is not an acceptable answer from a LLM makes me even more skeptical of AI than I am now. It sounds to me like the goal is to keep people using the model even if you KNOW incorrect / unwanted / (even harmful) data is being returned? More proof to me that the majority of LLMs are just glorified search engines used to collect massive amounts of data for marketing research on those willing to use it. No thanks.

Source: youtube · Posted: 2025-11-19T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgwmqSE39iyLF9Mzemh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzYAerrrQN1cFGzuv54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxKQbGcwkQCiN22Ibx4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-jbwUikWyu42QQLh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwkmE8elRaDem8h43h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyAA6Cclk9ioTzhY3l4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzgsddE9q6ecLOmpZF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzEUX_kG29sX3WlNcF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz9ByUof-k4ASQ3TYl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx9X-CkWUq6Km9w8zd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
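The lookup-by-comment-ID flow above can be sketched as: parse the raw batch response and index the records by `id`. This is a minimal illustration, not the tool's actual code; the two records are copied from the raw response shown here, abbreviated to keep the sketch short.

```python
import json

# Raw LLM response, abbreviated to two of the ten records shown above.
raw_response = """[
  {"id": "ytc_Ugw-jbwUikWyu42QQLh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzEUX_kG29sX3WlNcF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

# Index the batch by comment ID so any coded comment can be inspected directly.
coded = {record["id"]: record for record in json.loads(raw_response)}

# Look up the record that matches the Coding Result table above.
record = coded["ytc_Ugw-jbwUikWyu42QQLh4AaABAg"]
print(record["policy"])  # liability
```

The dict comprehension makes each lookup O(1), which matters once a batch holds thousands of coded comments rather than ten.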