Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> Look from different perspective: as someone who creates AI and from professional standpoint is really impressed by those models I feel like slowing their development by limiting access to data might not be the best way.
> Instead I'll aim more into limiting access to AI (paid access maybe or slower, moderated output) and post-creation governance of the pictures. So something like searching the internet for similarities to generated pictures (using AI!) and not outputting them if similarity is more than some threshold (let's say 70-80%) to existing artwork. Or simply making it impossible to input any artist name (except those who aren't affected negatively, so dead for many years or volunteering) in a prompt.
> Right to opt out of using their work as samples (tic on a profile recognizable by scrappers is a good idea) seems like no brainer thing that should be implemented right away, but defaulting to asking permission will slow down AI image HUGELY and might hurt its own evolution into something more independent.
> Thank you for attending my Ted talk.
Source: youtube · Posted: 2023-02-08T10:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw4ABbMSAh5gy-8hNl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQLpHUcgB46IDwQLh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugz2yUHwMhrWfcRlliV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyXmpSwD9JMlE-2xHV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyO01wOsaPYlswJ7eR4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwAeEg9F1gI42Bx9NJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwMU1uQExm2y4VNAhl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugx8eAuIdyEZ_cEptal4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzELyhmHP44SMcK6n54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZgSb-Gyf-rgH6jFF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
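The model returns one JSON array per batch, with each record carrying the comment ID and the four coded dimensions. A minimal sketch of how such a response could be parsed and looked up by comment ID is below. The allowed label sets are inferred only from the values visible on this page; the full codebook may contain additional labels, and `parse_raw_response` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Label sets per dimension, inferred from the codes visible in this page
# (assumption: the real codebook may define more values).
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into a dict
    keyed by comment ID, skipping records whose values fall outside
    the inferred label sets."""
    by_id = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[cid] = {dim: rec[dim] for dim in DIMENSIONS}
    return by_id

# Usage with the last record from the response above:
raw = '''[
  {"id":"ytc_UgxZgSb-Gyf-rgH6jFF4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]'''
coded = parse_raw_response(raw)
print(coded["ytc_UgxZgSb-Gyf-rgH6jFF4AaABAg"]["policy"])  # → regulate
```

Skipping out-of-codebook records (rather than raising) keeps one malformed model output from discarding a whole batch; how strictly to validate is a design choice for the pipeline.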