Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples (click to inspect)

| Comment preview | ID |
|---|---|
| "Save 10 rapist already due for death sentence or save 1 innocent child ?" Ai: "… | ytc_Ugw56BFgG… |
| LOL Joe is SUCH a coward. Joe had a one-season flop on television called "Joe Ro… | ytc_UgxVdBYlI… |
| racist ass victims "look how empty it is" look how messy and unruly it is. yall … | ytc_UgyK_XQTd… |
| It's like a brush? Like a camera? BRUH WHICH ONE? It's two vastly different medi… | ytc_UgyE7CIgH… |
| Funny how everyone thought the machines would take all the labour and trade skil… | ytc_Ugy6AuuHH… |
| This was 2 years ago and I'd like to thank you for great idea. World wide milita… | ytc_UgzkgQB-k… |
| Re making a decision stage: very nice to optimize spendings using AI, but it's a… | ytc_UgxXVjfVF… |
| Another point: considering that these models were trained on Reddit, 4chan, and … | ytc_Ugy_OJ_p4… |
Comment
I figured this out in college writing a couple psychology papers. The first one was short and I was feeling lazy, so my intro included various sources that I asked ChatGPT to find. I got an A so I was pumped. On the next paper I initially did the same thing, but got suspicious when like 5 more sources in a row confirmed my intuition on the topic about which I was writing. So, I tried to Google the papers that ChatGPT supposedly cited. They did not exist. ChatGPT gave me flawless APA citations, with intricate abstracts that confirmed exactly what I wanted to hear. I was lucky that a likely overworked TA graded the initial paper I used ChatGPT on. I haven't used the service since unless to write something a little more eloquently than I can come up with in the moment. DO NOT USE CHAT BOTS FOR RESEARCH.
Source: youtube
Topic: AI Responsibility
Posted: 2024-10-24T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgywobXaVLQz0ORDzAZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwLPJNMia34y80Fx3F4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgybH_PSyZjlHA5U46V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxUG7dv4w2bzYcKHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyRbEaz9-JUs9fUTRd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxI4hpFNY0ugOsmnCp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2gQQMPl-t6FQxth94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzrW04_3Z7euxmz6rZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzkvu62FmnYR6zYAnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQYvcTZSrxFmJninB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"})