Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Theft has been around for a very long time. Those who do it the most insist you call it "inspiration" - but they didn't add a lot, except their own power fantasies (or other stolen things).
The rest is usually stolen, like World-Building, Characters, Plot points etc. No sitting down to ponder over it, just stealing and enjoying the self-insert-(power-)fantasies.
Which would be fine, really. Enjoy it to your heart's content.
But they try to hide the evidence by renaming stuff etc to make money from it. Which is where I draw the line: no mentioning the original sources while trying to make money from other people's ideas. Exposure is important to authors.
When THEY did it, it's cool, it's fine, stop being a busy-body.
But when AI does it now, it's suddenly ethically wrong and every author community seems to be in shock.
It has been ethically wrong all along, also when humans did it. At least to me. Yet more and more common practice.
I fear to post this because triggered (called out) people will come at me, but if just one person adds a well-balanced, open-minded opinion, I want to risk it.
Source: YouTube · 2025-06-28T10:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwPTP2jk9VHuZ9MwuZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"sadness"},
  {"id":"ytc_Ugzpw9dNAXUBoaHeHf54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyycDf76w3K5i1kkxN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwUmdo7MLr5pOpKHSh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQeKl0ZWbBJ23MjV14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwiu7V3E1ofhO0QajB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwgsD5rWCD11M819tl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAeW3_iOFM7sjxlPh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz6AZ_ozRnMll8UrWR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwNLDPgHN1WHIowZzB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
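The raw response is a JSON array of per-comment codings, one object per comment ID, with the four dimensions shown in the table above. A minimal sketch of parsing such a response and indexing codings by comment ID — the allowed values below are only those observed in this sample, not a published codebook, so treat them as an assumption:

```python
import json

# Dimension vocabularies observed in this sample; a real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"sadness", "fear", "mixed", "indifference", "outrage",
                "resignation", "approval"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) and
    index the codings by comment ID, flagging any out-of-vocabulary value."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        by_id[cid] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Two records copied verbatim from the response above:
raw = '''[
 {"id":"ytc_UgwPTP2jk9VHuZ9MwuZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"sadness"},
 {"id":"ytc_Ugwiu7V3E1ofhO0QajB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''
codings = index_codings(raw)
print(codings["ytc_Ugwiu7V3E1ofhO0QajB4AaABAg"]["emotion"])  # mixed
```

Indexing by ID is what makes the "look up by comment ID" view possible: the displayed coding-result table is just the record retrieved for the selected comment.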