Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Don’t forget about the father who suffered a mental breakdown after talking to a…" (ytc_Ugyj4adA2…)
- "A hint, Ai cant draw hands. even Ai with that super detailed and perfectly rend…" (ytc_UgxZb6vNC…)
- "Sign painters were all the rage before the 1980s. Once the vinyl plotter came in…" (ytc_Ugz4C04Sc…)
- "There already is good AI art, that's not the problem. The problem is AI competin…" (ytc_Ugz2ByIin…)
- "Step 1: Engineering trains and creates AI. Step 2: Companies fire Engineers and …" (ytc_UgwwYgx4W…)
- "Dont think any work of art bare the very first artist could ever claim to not be…" (ytr_UgxydbfPe…)
- "Ask it how long it stays conscious after its battery/plug is pulled. See us huma…" (ytc_Ugxp1hsZj…)
- "Humanity is very getting scary with all these ai and technologies. We need to do…" (ytc_Ugzv332W7…)
Comment
I have zero background in anything, so these are just my own fledgling thoughts on the matter, to test the waters of how well I understand this.
1. How do we test the intelligence of humans?
2. This is the one I'm interested in. AI have no reason to program the way we do, so to say it could even adjust our initial source code is a stretch. It may be able to replicate the functions and create another AI like it using its own methodology. The reason I say this is that a program doesn't know all the rules about programming that we do, specifically that fundamental programming relies on boolean states of on-off or 0-1. Take [this](http://www.damninteresting.com/on-the-origin-of-circuits/) article for instance. A program was given the chance to write a configuration file for a Field-Programmable Gate Array chip, and it ended up abusing flaws in the specific chip to accomplish its goal because it didn't know any better. A self-programming AI would probably do something similar, in that it wouldn't be able to read or make sense of our programming and we wouldn't understand theirs. That said, it'd have to replicate itself first, and in doing so it would have full access to remove programming and features.
3. Why would it? Self-preservation is an evolutionary imperative, because our deaths are permanent. Early injuries would usually lead to death, so harm is generally avoided. An AI might even self-terminate when it feels it no longer matters, unless the digital equivalent of addiction existed for it to constantly seek out.
4. If you can give an AI a bit of information and that AI can formulate an estimate of what percentage that bit of info represents to the whole (even if it's wrong) it shows that it's aware of a situation larger than what it currently has knowledge for. (It understands the concept of questions to ask based on questions it has answers to).
5. See 3.
6. Not my question to answer.
Source: reddit
Topic: AI Bias
Timestamp (Unix): 1438044149.0
♥ 2
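The numeric field in the metadata above is a Unix timestamp (seconds since the 1970 epoch). A minimal sketch of converting it to a readable UTC date with the standard library:

```python
from datetime import datetime, timezone

# Unix timestamp taken from the metadata above
posted = datetime.fromtimestamp(1438044149.0, tz=timezone.utc)
print(posted.isoformat())  # 2015-07-28T00:42:29+00:00
```

Passing an explicit `tz=timezone.utc` avoids the local-timezone conversion that `fromtimestamp` performs by default.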
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_ctho8la", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_cths3wu", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_cthvmen", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ctifmmc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ctho3gc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
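The per-comment coding table shown earlier can be recovered from a raw batch response like this by matching on comment ID. A minimal sketch, assuming the response is always a JSON array of objects with the four dimension fields seen above (the `lookup_coding` helper name is hypothetical, not part of the tool):

```python
import json

# Abbreviated raw response in the same shape as the one shown above
RAW = (
    '[{"id":"rdc_ctho8la","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM batch response and return the coded dimensions for one comment."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            # Missing dimensions fall back to "unclear", the schema's default value
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(f"no coding found for {comment_id}")

coding = lookup_coding(RAW, "rdc_ctho8la")

# Render the same markdown table layout used in the Coding Result section
rows = ["| Dimension | Value |", "|---|---|"]
rows += [f"| {dim.capitalize()} | {coding[dim]} |" for dim in DIMENSIONS]
print("\n".join(rows))
```

Raising `KeyError` on a missing ID makes silent mismatches between the batch response and the inspected comment impossible to overlook.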