Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I only use AI to work with my ideas 😂 for my content. Otherwise reading or an au… (`ytc_UgwgnJJFb…`)
- You have to make a predicting model so you could look at the best outcomes and t… (`ytc_Ugwg8JX0O…`)
- @Mrhellslayerz i dont think the dude Who types in the prompts is the artist but … (`ytr_UgxLSn1mY…`)
- It hit a bottleneck years ago. The first chatgpt models vs the latest aren't tha… (`rdc_mxyjsba`)
- Meanwhile Google Bard is probably the worst LLM on the market. Eventually AI wil… (`ytc_UgwRdM7Bd…`)
- This video will be super funny in a couple of years when the AI bubble bursts.… (`ytc_Ugx5RH3ow…`)
- watch?v=EADKCcHPiA0 . Remember: AI is basically creating another sentient race, … (`ytr_UgxhSnsNI…`)
- Anyone else ever see the show "Person of Interest " I've hated the thought of ai… (`ytc_UgxK3uFJ3…`)
Comment
Why argue from a place of emotion rather than logic? Either AI does or it doesn't. We kinda don't have much info on how AI will do in the future.
Why is "we don't know" not good enough? For all we know, AI might get good enough to make "better" art than the best humans. We're currently developing tech that allows us read minds and AI strong enough to parse your actual thoughts. Future AI art could literally be tailored to individual taste so well, no artist could beat it in elliciting emotions.
Or... We could run into computational hurdles or even outlawing by governments.
At the end of the day, why is there so much emotion and irrational argumentation for AI art? Why not argue about the inputs of AI art? AI art is doing the same thing human brains do, which is copying someone else's art and then combining it with other art to make something new. The problem seems to me like AI art is self extinguishing since it needs inputs but doesn't pay for said inputs so the moment AI art gets good, it stops more inputs killing it dead in its tracks.
Source: youtube | Posted: 2024-10-16T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxPaxosVGndMJKYYjd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-qersYp2yHrbwoLF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwYpmwOJgYRcCOdJ4d4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjCxaF1Q-Fx-Ql8AB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwktXkl07M75E2CU9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxd9rYG9o4eMccLhHV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxdWXN1b6ovInwZnWJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvDnOPJXPLOC5oPz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyCatYGfjIJ5tRaTK94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwirFD3jgsZQbZdGNJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
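The raw response above is a JSON array with one coding object per comment, each carrying the same four dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A comment-ID lookup over such a batch can be sketched as follows; this is a minimal illustration, not the tool's actual implementation, and the `lookup_coding` helper name is hypothetical (the field names and example IDs are taken from the response above).

```python
import json

# Excerpt of a raw batch response as emitted by the coding model (from above).
raw_response = '''
[
  {"id": "ytc_UgxPaxosVGndMJKYYjd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwirFD3jgsZQbZdGNJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
'''

# Every well-formed coding object must carry these keys.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def lookup_coding(raw, comment_id):
    """Parse a raw batch response and return the coding for one comment ID,
    or None if the JSON is malformed or the ID is absent."""
    try:
        codings = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model returned malformed JSON; flag the batch for re-coding
    for entry in codings:
        # Skip entries that are missing any expected dimension.
        if EXPECTED_KEYS <= entry.keys() and entry["id"] == comment_id:
            return entry
    return None

coding = lookup_coding(raw_response, "ytc_UgwirFD3jgsZQbZdGNJ4AaABAg")
```

Guarding on `EXPECTED_KEYS` means a partially-coded entry is skipped rather than surfaced with missing dimensions, which is the safer default when inspecting model output.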