Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- @nathanlieu6840 we could literally replace 80% of jobs in the next 10 years whic… (ytr_Ugwivdl0k…)
- The new profession will be Professional Flatterer - it needs to be by a human, r… (ytc_UgzKs8DjZ…)
- Good vid. I do the same often. The ai models seem to know the answers. Have had … (ytc_UgzsBakTh…)
- The dumbest thing a human could make, robots. There are already programs that le… (ytc_UgyUNRoXb…)
- Delete ai, use protected from adds devices and operational systems, attend confe… (ytr_UgzMD3H2T…)
- Ive worked on and with AI for years, it is not to be used for creating flat art.… (ytc_Ugx7OyT-2…)
- Brains have 100 Trilliion connecionts for many reasons. Direct comparison is not… (ytc_Ugyt5h-yn…)
- I think the only thing that is missing for AI consciousness right now is for the… (ytc_Ugz4RLijb…)
Comment
> My thoughts on artificial intelligence:
>
> INFINITE possibilities. Literally. I would love to see it grow, with automatic level design, borderless dialogue, imaging of celestial and (theoretical) phenomena, and possibly even multi-track computer systems.
>
> The issue is that all of these great things that we could do is just... not focused on. I know it is being worked on, there are AI-powered video games, chatbots, 3d adaptations (from satellite scans), etc. But image generation, while a great step in artificial intelligence, is a drastic violation of computer ethics. Reason why: it is TOO boundless. Think of all of the sci-fi media with androids. Dystopia included boundless AI that can think for itself, its self-preservation, and the riskiest quality to give ANYTHING with knowledge: the intent to dominate space. Utopia, however, had androids within very restrictive boundaries. Computer ethics are very different from human ethics. You can make a robot do a job for you with no pay, but you can't do that to a human. Likewise, you can give a human the ability to make art taking inspiration from other people's work, you can't give that to a computer. Why? Because computers cannot think for themselves. Computers do EXACTLY what they are told to do. They have no sentience, they have no thought process outside of analyzing bits, and this is both a great thing and a horrible thing. One one hand, a computer does exactly what it is told to do, without question. On the other hands, a computer does exactly what it is told to do, *WITHOUT QUESTION*.
>
> Sorry for my ramble, but a software engineering major with artist friends makes for an interesting take on ai, in my opinion.
Platform: youtube · Video: Viral AI Reaction · Posted: 2024-11-02T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyF2rgsSWO_tC-vbot4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxnHV883fpTQpQjLWJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgznDZ4vBxafyMVyvOV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJHv1qT-DUM1vFhsp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx2qDaDeY8v_IyTmTd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx3WYqsv6cNJnCJq6V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxAQ9SO8aBzAeyNFeZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgykCGPhEnsv6aOX6c94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgywzjjC-CPKI7ykiZh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgypAWaBwoFqBkDmxMx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
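The raw response is a JSON array with one object per coded comment. Before merging a batch like this into the dataset, it is worth checking that every object parses, carries a plausible comment ID, and uses only known category labels. Below is a minimal Python sketch of such a check; the allowed values per dimension are inferred from the responses and table shown above (the full codebook may define additional categories), and `validate_batch` is an illustrative helper name, not part of any real tool.

```python
import json

# Allowed values per coding dimension, inferred from the output above;
# the actual codebook may include categories not seen in this batch.
SCHEMA = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "mixed", "resignation", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed codings."""
    rows = json.loads(raw)  # raises ValueError if the model emitted non-JSON
    valid = []
    for row in rows:
        # Comment IDs in this dataset start with "ytc_" (comment) or "ytr_" (reply).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and use a known label.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_UgyF2rg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
print(len(validate_batch(raw)))  # → 1
```

Rows that fail either check are dropped rather than repaired, so a second pass (or a re-prompt) can handle comments the model mis-coded.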