Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Make robot 🤖 kill human, think you so smart, but every thing will lost control 😊…
ytc_Ugzu8FpW7…
I'm not scared.
In my opinion as long as not everything is automated programmers…
ytc_Ugxuegxps…
The reason she doesn't look like a modern young woman: She doesn't have a tatto…
ytc_Ugxoa81_s…
She denies the AGI wildcard (correctly), but believes the “threat to democracy” …
ytc_Ugx9LSLda…
Here's my question though- if we automate everything, and nobody has jobs, who's…
ytc_UgxG4Wm5-…
AI would be better at code security analysis and performance within a few years.…
ytc_UgyD4ug12…
I'm a Young Artist, I make all the art for my non meme videos and I drew my pfp.…
ytc_UgwiXO1KX…
Just read a great book (More Everything Forever) about how deep the rabbit hole …
ytc_UgxRekLlk…
Comment
The Young Turks
too late..did you not know the navy has already tested ai ships? they succeed but ultimately failed in major aspects and are still funding and improving it...also there are different AI levels....such as the AI you see in video games in which the AI is completely restricted and can't go beyond its virtual space and is capable of only what is programmed.
second off every AI would be a killing thing even if it is for industry...if you were to restrict a AI to only its programming it is still probable that it will kill...there are articles on this one example uses paper clips..if you tell a AI to produce paper clips it will continuously produce paper clips...abstract programming is a problem because what happens if you get in the way of it making paper clips? the article suggest that the AI would than go mad and start to kill you because your in the way of it making paper clips..eventually its going to use up the resources to make the paper clips on this world and maybe go off eles where...if it is capable of that.
if you were to give the AI any sort of abstract command it will carry out that abstract command just maybe not the way you hoped it would....and military commands aren't always very straight forward...such as "move to these coordinates and take out this target"...well what if there is a village in the target zone? the AI is just going to carry out its command as you said the AI wouldn't care it would just do what it is told in anyway shape or form.
but that type of AI is still a far cry from what a actually AI is or can be like the Geth in Mass Effect (the way they are portrayed) or the Terminators
a AI can be as intelligent as some of the programs you see where there is a virtual thing/computer actually responding to what you type and learning from past and current conversations and improving its self. to as dumb as to well a video game AI..only capable of doing what it is programmed
or as complex and intelligent as i said above with the Geth from Mass Effect.
we already have a AI in our every day life....search engines in browsers they cross reference things pick up words based on keywords (what you type in the search box) and they are even more restricted than video game AIs
YouTube
2015-07-31T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugh4Y0du4dZ53ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj9S7He4JB8SngCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugg35B6pH_UxTngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggYXi1L41uO13gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgixNw6ZI8gvlngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh6JtGH2uCypngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjKo0rMFtp5cHgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UghUF7sRctjnPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgiMWJhekJHrLHgCoAEC","responsibility":"government","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugjoe2j6EEqhTngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```