Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "OpenAI now basically has 4+ companies that have split off from them because they…" (`rdc_mz6t3yc`)
- "We needed a Data Entry/QA person, but my company had me and a couple engineers f…" (`ytc_Ugzum8T9l…`)
- "This was soooo sad to listen to. I imagine that this woman feels so impotent not…" (`ytc_UgxnpKfvZ…`)
- "This inspired me to force AI to generate a video about how it steals art and the…" (`ytc_UgxYOEVWW…`)
- "So much scaremonging about AI. Yes it will transform life for everyone, it will …" (`ytc_UgyEvqUgd…`)
- "Why don't these HK folks just chill, have some tea and maybe we can talk about r…" (`rdc_f1w2ygf`)
- "all these people that are responsible for creating AI dismissing and avoiding th…" (`ytc_Ugz1KHb1b…`)
- "Judging by some of the responses the AI made, it mostly seems like it's quite ad…" (`ytr_UgwTcRwbg…`)
Comment
> Anybody that writes a story is an author, a good author using an AI to help write a story will only really be using it to break apart writer's block and to feed them ideas or give them some filler between important sections of their stories.
>
> A bad author might use an LLM to generate the entirety of their book.
>
> LLMs are a tool, and saying that an author isn't an author because they use a specific tool is just plain foolishness.
>
> What you're trying to do is delegitimize people you don't agree with by gate keeping the title of "author" from them.
>
> They're authors, are they good authors? Maybe not, but they're still authors.
>
> A 2 year old scribbling gibberish onto a piece of paper is still an author or an artist, they're _also_ a 2 year old scribbling onto a piece of paper and not really making anything of value, but that doesn't take away from the classification.
>
> A bad author is a bad author, but they're still an author there's no reason to try and distance them from other authors just because they use a tool you don't agree with.
Platform: youtube · Video: AI Moral Status · Posted: 2024-08-31T16:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgxNDqj1FlDsiAFspGl4AaABAg.A7ps2H0nu-eA7vD3mWnFO","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugy46Wf0e71bUX0wpaJ4AaABAg.A7pWr6FJeKyA7pbBkKZ6qF","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyNzMF9XuRVCKMzlCt4AaABAg.A7oyWSEGHODAI_z0yFw4vY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugzi6o59I-V8wTiIvop4AaABAg.A7or9F8pyZ-A7rWw1uOVTw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugzi6o59I-V8wTiIvop4AaABAg.A7or9F8pyZ-A7r_bI_QWOA","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzVIinh10SJs8Gpc_x4AaABAg.A7oGBfO-GFWA7q2PFqoFYl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwGwDG6rKFj6Cy7r5J4AaABAg.A7o8YfEGH6qA7t7UP6kWDw","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytr_Ugxz0i65KH2Jil0V69p4AaABAg.A7myu7y_5ZwA7oMJLZnVCs","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwXitAvpkr_fWPJhZ94AaABAg.A7mU_knRuTAA7oL-CGzYK2","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwFO4vYTHYH2f6Eil94AaABAg.A7mK0I42AQeA7oIh6FA2K2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
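The raw response above is a JSON array of per-comment codings, one object per comment with four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output might be parsed and tallied, assuming the field names match the sample above; the comment IDs and values here are shortened, hypothetical stand-ins:

```python
import json
from collections import Counter

# Hypothetical raw model output in the same shape as the sample above.
raw = '''[
  {"id": "ytr_example1", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]'''

# Parse the array, then count how often each value appears per dimension.
codings = json.loads(raw)
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codings) for dim in dimensions}

print(tallies["emotion"])  # Counter({'outrage': 1, 'approval': 1})
```

A dictionary of `Counter`s keeps the per-dimension frequencies separate, which makes it easy to build the "Coding Result" table above for any single comment or to aggregate over a whole batch.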