Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If AI is so intelligent, at what point will it recognize that making billions of…" (ytc_UgxGye5Ss…)
- "Well AI won't replace art for the sake of art. Like cameras didn't replace paint…" (ytc_UgzNeKM4H…)
- "Many said the self-driving would be better than humans but that is clearly not t…" (ytr_UgxEo29BB…)
- "I mean, wouldn’t you rather purchase a piece of art you know a HUMAN made and pu…" (ytr_Ugxfhljwy…)
- "Here is the thing we don't know our limitations at Intellgence. If we improve an…" (ytc_UgwTivBF1…)
- "all amazon even does is throw money at the problem and hope it disappears. If no…" (rdc_g58c2qn)
- "and if ai understand the reality , we can't , i dont'know what will happen , ma…" (ytr_Ugw0FV92t…)
- "I'm not the kind of person that goes into a panic over silly crap, most of the m…" (ytc_Ugwom7moP…)
Comment
I'm not an expert on the subject but here's my two cents. Don't underestimate the power of exponential growth. Let's say we're currently only 0.0000003% of the way to general artificial intelligence, and we've been working on AI for 60 years. You may think it would take two million more years to get there, but that's assuming that the progress is linear, i.e., we make the same amount of progress every year. In reality, progress is exponential. Let's say it doubles every couple years. In that case, it would only take ***30 years*** to get to 100%. This sounds crazy ridiculous, but that's roughly what the trends seem to predict.
Another example of exponential growth is the time between paradigm shifts (e.g. the invention of agriculture, language, computers, the internet, etc.) is [decreasing exponentially](https://upload.wikimedia.org/wikipedia/commons/4/45/ParadigmShiftsFrr15Events.svg). So, even if we're 100 paradigm shifts away from general artificial intelligence, it's not crazy to expect it within the next century, and superintelligence soon after.
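The comment's back-of-the-envelope arithmetic can be sanity-checked. A minimal sketch, using the commenter's own assumptions (the 0.0000003% starting point and the doubling rate are their premises, not measured values); note the quoted "30 years" only falls out if progress doubles every single year, while doubling "every couple years" gives roughly twice that:

```python
import math

# Commenter's premise: we are 0.0000003% of the way to general AI.
progress = 0.0000003 / 100  # as a fraction, i.e. 3e-9

# How many doublings are needed to go from 3e-9 to 1.0 (100%)?
doublings = math.log2(1.0 / progress)
print(f"{doublings:.1f} doublings needed")        # 28.3 doublings needed

# Annual doubling lands near the quoted ~30 years;
# doubling every two years takes about twice as long.
print(f"annual doubling:   ~{doublings:.0f} years")      # ~28 years
print(f"biennial doubling: ~{2 * doublings:.0f} years")  # ~57 years
```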
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Bias |
| Posted (Unix timestamp) | 1438007361.0 |
| Score | ♥ 46 |
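The `1438007361.0` value is a Unix epoch timestamp; a quick conversion (assuming it counts seconds since 1970-01-01 UTC, the usual convention):

```python
from datetime import datetime, timezone

ts = 1438007361.0
posted = datetime.fromtimestamp(ts, tz=timezone.utc)
print(posted.isoformat())  # 2015-07-27T14:29:21+00:00
```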
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_cti1yju", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthnoeb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthxc0i", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_cthtjt1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthrpzb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
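The raw response is a JSON array with one coding record per comment. A minimal sketch of parsing and validating it (the field names are taken from the response above; the validation itself is illustrative, not part of the tool):

```python
import json

# The raw model output shown above: a JSON array of coding records.
raw = (
    '[{"id":"rdc_cti1yju","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"rdc_cthnoeb","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_cthxc0i","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"rdc_cthtjt1","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"rdc_cthrpzb","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

records = json.loads(raw)

# Each record carries the comment id plus the four coding dimensions.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
for rec in records:
    assert set(rec) == expected_keys, rec

print(len(records))                     # 5
print([r["emotion"] for r in records])  # ['mixed', 'indifference', 'fear', 'mixed', 'approval']
```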