Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or inspect a random sample.
Comment
I feel like the more pertinent question is: are we making systems that are in some way sentient (having an experience, pain, pleasure, etc) or even conscious (self-aware)?
Because if so, that to me is kind of a larger ethical issue to work through than anything you mentioned.
Something doesn't feel quite right about bringing another sentient, conscious, being into existence without it's consent and you can't really get it's consent before it exists so the best you could do is create it and then give it the option to kill itself if it didn't want to be created... but that's like, so fucked up lol.
For the same reason, I feel like it's kind of insane that so many people decide to have kids and think nothing of it like it's just some normal and completely unproblematic thing that's just expected of you, so you do it... but anyway the main point I'm getting at is: if you create a living being, with a sentient experience, and conscious awareness, that's a HUGE fucking responsibility you're taking on, you should not be doing something like that if the being you're creating is just going to suffer throughout it's whole life, so it's kind of your *job* to make sure it's comfortable, happy, has everything it needs, feels loved if it needs that, etc etc.
I don't think a lot of people are thinking very much about OUR impact on AI - only usually the other way around, how AI impacts us or might impact us in the future.
Hopefully the models and networks we create don't wind up as selfish and inconsiderate as we are.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Responsibility |
| Posted | 2023-11-12T01:0… |
| Likes | 8 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzrwXdYpo3Uqoamt2l4AaABAg","responsibility":"society","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEn5L_-R1wxvVw7-V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzTdXDGNGrZzSmfD0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwCHzHKiYDUzVIA8z14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwY16aXC8SUFT9zzbd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6owrZ55fPHnd7dtJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugz4YfgCE4I3EUjQHL14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxHGVJvGzMZg_MWbyZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwVJFWAMHvXwfW_CYV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgysfQ7k_ZckAFSixjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"mixed"}
]
```
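The raw response is a JSON array of per-comment coding objects, one per comment ID. A minimal sketch of how such a payload could be parsed and indexed for ID lookup is below; the allowed label sets are inferred only from the values visible in the responses above (the real codebook may include more labels), and the `parse_codings` helper is a hypothetical name, not part of the tool.

```python
import json

# Allowed labels per dimension, inferred from the responses shown above.
# This is an assumption; the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "society", "user",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    into a dict keyed by comment ID, validating each label."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        by_id[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return by_id
```

A lookup by comment ID then becomes a plain dict access, e.g. `parse_codings(raw)["ytc_UgwCHzHKiYDUzVIA8z14AaABAg"]`.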