Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples — click to inspect
- “What really worries me the most is what if there’s a hacker and they program you…” (`ytc_UgytHqr5H…`)
- “Every time I see one of you AI stans in the comments, you always prove the thesi…” (`ytr_Ugy8V9gZT…`)
- “Yeah well unfortunately companies now require you to use AI tools. You can't com…” (`ytc_UgzHSV0DI…`)
- “When creating anything how you test it and the knowledge you have at the time of…” (`ytc_UgyDZIMb9…`)
- “And now they’re trying to put self driving 18 wheelers on the road somebody make…” (`ytc_UgwMk2TJq…`)
- “If it’s being used in an industry (random example could be Disney or something) …” (`ytr_UgygR4-0C…`)
- “@IgnisKhan it’s not sentient. How does it roleplay hacking nuclear arsenals or …” (`ytr_UgzsFSPFi…`)
- “I work at FedEx and drive a 1100 box truck 😂 they can automate the trucks action…” (`ytc_UgxgJI0dq…`)
Comment
Most people have this crazy idea that AI models will asymptotically reach human skill level but won’t completely match it any time soon, because the brain is crazy complex and we don’t even understand it well. My feel is that the general public thinks AGI is 10-20 years away, or will never be universally reached. In reality it’s probably 3-7 years away.
In reality, those model improve 20+ IQ points per year (though uneven), and there is no stopping in sight. They will just shoot past human intelligence level. There won’t be any “AI is gonna help me with my work”. Yeah... maybe in 2028 it will help you with your work, but in 2029 it’s gonna be much better than you across the board and your sheer presence becomes a liability.
reddit · AI Moral Status · 1765315886.0 (2025-12-09 21:31 UTC) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nt6k0lj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_nt75n3n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_ntaea11","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_nt7448h","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_nt799bq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```