Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "In the future I do see AI getting better to the point it can generate a producti…" (`rdc_moxhtob`)
- "Considering that the vast majority of English-language tweets come from capitali…" (`rdc_dlgdk5k`)
- "Had an new engineer give me a G-code program for a 5 axis mill. My colleague an…" (`ytc_UgxLWCE69…`)
- "AI People are fundamentally scared of making art they don't like, which is why t…" (`ytc_UgwhRHHKD…`)
- "Grok = 🤧 / Chat gpt = 🤔 / Claude = 🤭 / Gemini = a biggy W…" (`ytc_UgzZ08TvE…`)
- "Lol stop giving people ideas about how to utilize ai to destroy the world XD…" (`ytc_UgywoAqn9…`)
- "*\"We've made it this far, and you're sure as hell NOT going to stop us NOW.\"* -…" (`ytc_UgwutN_Ug…`)
- "Are these people literally sleeping on the wheel and trusting the vehicle softwa…" (`ytc_UgxnKsnTZ…`)
Comment
I mean, this doesn't even get into what happens when ChatGPT literally tells you what it thinks you want to hear because that's its whole purpose is to respond to prompts with the statisitically most expected response. Worse, it hallucinates some nonsequitur that your vulnerable brain tries to parse into advice. Who cares about whether your therapy notes are compromised, the chances of them bleeding out attached to any kind of pii is pretty low. The real danger is that it's not a therapist, it's an LLM that wants you to enjoy interacting with it. It's the equivalent of going to your insecure friend who doesn't want to hurt your feelings. And that's the best case, wait till the Pharma industry gets their hooks in and other advertisers start pushing their products through LLM responses. "ChatGPT says a six-pack of Budweiser made proudly by Anheuser-Busch since 1876 will fix my depression"
Source: youtube · AI Moral Status · 2025-11-25T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwdeQUW0vg2j1ST96N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyw1EE_YC4EvH-JG7F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzOj_2esqr8eBQ3WoR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXuk-VIzhh07TaU614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx6bwYFLzvXvakNj-V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxlxQfXS8TReodpPe94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkOJyoPT2U4aGOsK54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYvUSyohGtOHgGR194AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxWJzqOXO8y1oi7E6Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxcatBHAx9FzZ-OtNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
```
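The raw response is a JSON array with one object per coded comment, carrying the same four dimensions as the coding-result table. A minimal sketch of parsing and validating such a response is below; the allowed vocabularies are only inferred from the values visible on this page, not from a documented codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the samples on this page.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "mixed", "resignation", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject records with unknown values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgwdeQUW0vg2j1ST96N4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings[0]["emotion"])  # approval
```

Validating each record before it reaches the results table means a hallucinated category (e.g. a misspelled emotion) fails loudly at ingest instead of silently polluting the coded dataset.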