Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not me saying it, it's AI:

GPT: In conclusion, replacing "all jobs" with a global AI system remains highly improbable in the short or medium term. The potential disappearance of human labor remains a distant scenario: without a major breakthrough, we are far from it even after 2050. Summary: Given the current state, the physical and economic limitations, and the incremental progress in AI, a true AI system remains hypothetical. Our resources (energy, equipment, know-how) impose a strict ceiling. In other words, we are not yet close to a world where AI completely replaces human labor.

Grok: Integrated Conclusion and Timeline Projection. Integrating all factors—technological, energy, infrastructure, financial, and societal—AGI appears feasible through sustained innovation, but constraints like energy and scaling impose delays. A 10-20 year horizon (2035-2045) balances progress with realism, allowing for efficiencies (e.g., CPU-based AGI) and breakthroughs. This estimate acknowledges uncertainties, such as potential paradigm shifts accelerating timelines or unresolved barriers extending them.
youtube AI Governance 2025-12-24T09:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxdp1UFlLOtC6t3ZE94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxiw6UTpTTjhT28dWF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx30prUrKm2LF05I1N4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxpJd4IJ-K2rGzS4e14AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxMcX5VEy1cjFdbheR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw7wrfMbK4zqPTR4Zh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyRbnxTSyyxE6Mz1Jt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxcTXRAvWFTGAp-b_l4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyrUocKifAnwfrA9IV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy3Tz0QJJDvCjxfbOB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
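A raw response like the one above needs to be parsed and validated before the codes can be stored. The sketch below is one minimal way to do that in Python; the dimension names are taken from the output above, but the allowed value sets are assumptions inferred from the visible records, not a confirmed codebook.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are an
# assumption inferred from the sample output, not the actual codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "developer", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    coding dimension holds a recognized value.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid and one malformed record (hypothetical ids).
raw = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"ytc_b","responsibility":"everyone","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"}]'
)
print(len(parse_coding(raw)))  # 1
```

Dropping malformed records (rather than raising) mirrors how such pipelines usually tolerate occasional off-schema LLM output; a stricter variant could log or re-prompt on every rejected record instead.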