Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
>I can’t believe it’s still as popular as it is.
Because it isn't, just like…
rdc_nczd524
@sh3613 The underlying ethical framework that came from us? I don't understand w…
ytr_UgxKGWBEH…
All those who afraid of AI, I have a question. Why would AI want to get rid of h…
ytc_UgwFGalfN…
you have to do AI because "they" will definitely do it. im for the robots taking…
ytc_UgzTd3nOH…
Trolls targeted Tay immediately
Within minutes of launch, coordinated groups on …
ytc_UgycL0cOR…
@stars_and_tarot_YT yup. A physical job skynet wont be able to do until they get…
ytr_UgyxvV7uQ…
Much of the creativity, and most of the skill involved. And the skill which is …
ytr_UgySV7k9v…
I dont remember artists fighting for truckers when replacing them with ai was on…
ytc_UgxLQTAe3…
Comment
It's not me saying it, it's AI:
GPT:
In conclusion, replacing "all jobs" with a global AI system remains highly improbable in the short or medium term. The potential disappearance of human labor remains a distant scenario: without a major breakthrough, we are far from it even after 2050. Summary: Given the current state, the physical and economic limitations, and the incremental progress in AI, a true AI system remains hypothetical. Our resources (energy, equipment, know-how) impose a strict ceiling. In other words, we are not yet close to a world where AI completely replaces human labor.
Grok:
Integrated Conclusion and Timeline Projection
Integrating all factors—technological, energy, infrastructure, financial, and societal—AGI appears feasible through sustained innovation, but constraints like energy and scaling impose delays. A 10-20 year horizon (2035-2045) balances progress with realism, allowing for efficiencies (e.g., CPU-based AGI) and breakthroughs. This estimate acknowledges uncertainties, such as potential paradigm shifts accelerating timelines or unresolved barriers extending them.
youtube
AI Governance
2025-12-24T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxdp1UFlLOtC6t3ZE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxiw6UTpTTjhT28dWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx30prUrKm2LF05I1N4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxpJd4IJ-K2rGzS4e14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxMcX5VEy1cjFdbheR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw7wrfMbK4zqPTR4Zh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyRbnxTSyyxE6Mz1Jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxcTXRAvWFTGAp-b_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyrUocKifAnwfrA9IV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy3Tz0QJJDvCjxfbOB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
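The raw response above is a JSON array of per-comment codes, one object per comment ID, with the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a batch follows; the allowed category values are inferred only from the codes visible in this dump and are an assumption, not the tool's authoritative codebook.

```python
import json

# Category vocabularies inferred from the values visible in this dump.
# Assumption: the real codebook may contain additional codes.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "developer", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

# ID prefixes seen in the sample list: ytc_ (YouTube comment),
# ytr_ (YouTube reply), rdc_ (Reddit comment) -- an assumption from this dump.
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and validate each record."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id", "").startswith(ID_PREFIXES):
            raise ValueError(f"unexpected id prefix: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with a one-record batch shaped like the response above
# (the id is a hypothetical placeholder):
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
coded = parse_coded_batch(raw)
print(coded[0]["emotion"])  # indifference
```

Validating against a fixed vocabulary catches the common failure mode where the model invents an off-schema label, so bad records fail loudly instead of silently entering the coded dataset.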