Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below (truncated previews with their comment IDs):
- `rdc_jmfyo7p` — The head of AI 50+ years ago said we'd see the same thing in 3-8 years. https://…
- `ytr_UgwifkM1o…` — This, so much 😭 It takes a non-artist of the lowest caliber to fail at realising…
- `ytc_UgwKDzVZ3…` — i think even old school generative computer art is being mixed up with ai. when…
- `ytc_UgzIri4-X…` — Reduce personal cars down to singular essentials. They can come with self drivin…
- `ytc_UgyHBsC7z…` — I love drawing birds (hello autistic special intrest, but anyways-) For a long t…
- `ytc_Ugz3qkot4…` — So they've got AI using common sense. Wow, no more hanging out waiting for the…
- `ytc_UgzYVFjjg…` — I was in college before chatgpt went public and we frequently had in-class time…
- `ytc_UgzSYK2t5…` — There are differences between brain and AI where scientist still can't understan…
Comment

> "ChatGPT 4 is capable of abstract thought". Wow, just wow. As a machine learning engineer, this is the dumbest and most absurd piece of lunacy I've heard in a while. No, chatbots are NOT capable of abstract thought. In fact, they're not capable of any kind of thought.

Source: youtube · AI Governance · 2024-10-30T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
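A coded record like the one above can be checked against the codebook before it is stored. A minimal sketch, assuming the vocabularies below (inferred only from the values visible in this section; the actual codebook may define different labels):

```python
# Hypothetical vocabularies inferred from the values visible on this page;
# the real codebook may differ.
CODEBOOK = {
    "responsibility": {"developer", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The record shown in the Coding Result table passes cleanly.
print(validate({"responsibility": "developer", "reasoning": "deontological",
                "policy": "unclear", "emotion": "outrage"}))  # → []
```

A non-empty return value flags a record whose LLM output drifted from the allowed labels and needs re-coding or manual review.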
Raw LLM Response
[{"id":"ytc_Ugwmx1mfzRB3ufaJiW14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwvr4QdLtpngudMhGJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgznYoYW_lZyYKLnE8d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy9uYhMtNlbX71RZqR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7xDpdooO5z1-sByx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwKQ7XqXreiTNr4vud4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwVS5xQlf-9JhTA6Mt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWGk4PTktZs1HwLdp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgziGB7gap_JId9ZBUp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwhN9OauqwpDGdmsSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]