Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Being against autonomous weapons is fine. However, these guys have been fearmongering about AI for a very long time. Now, Elon Musk and Stephen Hawking _are_ obviously very intelligent, but they don't have any fucking relevant experience in the field, and Steve Wozniak sure as fuck doesn't either, and further, *these people are not infallible*.
Don't just blindly make an "appeal to authority", look at what they are actually saying about AI. Now look at what the leading minds *in the actual field* are saying. It's like a chemist making claims about the field of biology... yeah, he's smart, but does he know what he's talking about? When it comes to talking about AI _overall_, there's a lot of rampant speculation and assumptions being made.
At least in this case, they are spot on. Completely autonomous weapons is a bad idea. By the way, we are *incredibly* far away from having true Artificial General Intelligence. It's the *people* you have to worry about. These autonomous machines *don't* have fucking feelings or motivations, and they are *not* going to randomly decide to "destroy all humans".
Platform: youtube
Posted: 2015-07-30T04:1…
Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UghFMR-o-KZsRHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggAVBq5iJ1i43gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UghLACWF_x1wyngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugiwr3f7ga7jtXgCoAEC","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugj6PDyAmJ7aLXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjKsQW0N7bzF3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugg1b17BbcoJLHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugj7UUUQfs0ErHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UghQIAH0cc0IXXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiKYwCM4-FcaHgCoAEC","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
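The raw response above is a JSON array with one coding record per comment. A minimal sketch of how such a payload can be parsed to look up a single comment's coding (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` follow the JSON shown; the helper name is hypothetical):

```python
import json

# Illustrative payload in the same shape as the raw LLM response above:
# a JSON array of per-comment coding records.
raw_response = """[
  {"id": "ytc_UghQIAH0cc0IXXgCoAEC", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgiKYwCM4-FcaHgCoAEC", "responsibility": "user",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

def lookup_coding(raw, comment_id):
    """Parse the model output and return the record for one comment ID,
    or None if the ID was not coded in this batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_coding(raw_response, "ytc_UghQIAH0cc0IXXgCoAEC")
print(record["emotion"])  # -> outrage
```

In practice the model output would also need validation (e.g. checking that each dimension's value is one of the allowed codes) before being written into a results table like the one above.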