Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't understand the reactionary sentiment of your arguments. Saying as a blan…" (ytc_UgzeC-L-S…)
- "No more electricity no more screens no more AI. Dumb dumb dumb idiot humans. Sto…" (ytc_UgxpR8ezD…)
- "I listened to 16 minutes or so before I started making this comment. So, I'm an …" (ytc_UgyzjXZmt…)
- "U r not paying $30,000 for a robot to do dishes. There is a $2,999 dishwasher on…" (ytc_Ugxa7GOsf…)
- "saying AI art is just like Real art is like saying ordering DoorDash is real coo…" (ytc_Ugxcam5wA…)
- "AI at best is nothing more than a deceptive impostor that can’t know love hate f…" (ytc_UgyQBMgoN…)
- "anyone doing 10k pull request lines is doing it wrong. Also, if you're using ai …" (ytc_UgziZ-Agj…)
- "AI bias is a reflection of society. Maybe fix bias and selective advantage for c…" (ytc_UgzloawuG…)
Comment
This is all wrong. If you ask GPT to tell you something someone said, it will make it all up. The article about GPT spitting out training data is by having GPT say a word over and over again until it reaches the token context limit and starts outputting data it was trained on. The way you explain this is pretty misleading. You aren’t at risk of having your data leaked. But sure, I’m not against your overall message of being careful with the things you share to generative AI. However, I would leave reporting of this stuff to specialists. AI is a black box and even experts barely understand it. It’s easy to get it wrong and you are putting fear in people who are simply uneducated.
Source: youtube · AI Moral Status · 2025-06-05T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugz78yPJ8dOLl8lNU3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2Ak8xs7XYLpfns294AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyt5Q8cwgEgKBcibvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKz12Et--7DUep3xN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw00lpABTtHHuffxOl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzc6nInMrNd3cP-nAF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzbFKOShM6nzsFR4gp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzgL_hhl8VwZ092qMp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwXT3f0Q9agPbZ7KTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjJG7WehHw6gBlD4l4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
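The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of validating such a response before loading it, assuming the allowed value sets inferred from the codes visible in this dump (they are not taken from a documented schema):

```python
import json

# Allowed values per dimension, inferred from this dump's output;
# not an authoritative schema.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dump all start with the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r}")
    return records

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}]'
codes = validate_codes(raw)
print(len(codes))  # 1
```

A record with a value outside the inferred sets (for example, a misspelled emotion) raises a `ValueError` naming the offending comment ID and dimension, which makes malformed model output easy to triage.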