Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Just right after campus this is what I tried, then realized there was no future.…
ytc_Ugwjz5vVB…
Loved this enterview. But I would have liked to hear you talk about the future f…
ytc_Ugyaerjl2…
Ask ai: do you want to give me oral pleasure?
Ai:would you like me to start now…
ytc_UgzmY8s7q…
I don’t have a problem with AI being used to debug but I feel like it should not…
ytc_UgyFiPsX0…
Look, if I use a calculator to solve a crazy hard math problem, I still did the …
ytc_Ugz2t6hyx…
AI just as the abbreviation that it is artificial intelligence. Anything artific…
ytc_Ugx1p3kt7…
They've emulated human drivers so well that even their AI counterpart can't yiel…
ytc_Ugy5c-j3V…
So long as we're dealing with AI that work purely off of "predict next token" th…
ytc_UgwfBqwoE…
Comment
To the "data scientist" ... it's not what you don't know that gets you into trouble, it's what you know for sure, that just ain't true. If you are a scientist, one of the first things you adopt as a discipline is to always check ... how do you know what you know and why do you believe what you believe ... a classic example of overconfidence bias, especially ironic coming from someone trained in data science, where uncertainty and probabilistic thinking are foundational.
1. Claiming Certainty About a Complex, Evolving System
“I have a Venn diagram that shows AI will never replace x% of people’s jobs.”
That statement alone is a red flag. The future of work, technology, and society involves countless variables—technological development, economic pressure, social adaptation, policy, etc. To say something “will never happen” is not scientific—it’s dogmatic. A data scientist should be the first to admit that models are probabilistic, not prophetic.
⸻
2. Misunderstanding AI’s Capabilities
“AI is only as good as human knowledge.”
False dichotomy. AI is not a static reflection of human knowledge—it can synthesize, optimize, and apply knowledge in ways that individual humans can’t scale. AI can now:
• Discover patterns in massive datasets faster than any human.
• Generate original content and code.
• Diagnose diseases and simulate chemical reactions.
• Design hardware and optimize logistics with less human input than ever before.
Even if AI began by mimicking human knowledge, its utility already exceeds typical human job performance in many areas—and it’s improving fast.
⸻
3. Downplaying a Reasonable Concern as “Fearmongering”
“You shouldn’t say this because it scares people.”
This is emotional reasoning, not analytical. The purpose of forecasting disruption isn’t to spread fear—it’s to prepare. If AI has even a 50% chance of displacing a massive chunk of the labor force, ignoring or ridiculing that possibility is irresponsible.
⸻
4. Forgetting What “Data Science” Is About
A good data scientist knows:
• All models are wrong; some are useful.
• We work with confidence intervals, not absolutes.
• Probabilities > Predictions when the domain is uncertain and dynamic.
Saying “AI will never replace x% of jobs” is not a scientific claim. It’s a belief, and a naive one at that.
youtube
2025-04-16T14:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHFthkqMh7YVrRSit4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTSbXeSJ7mbp4zfEx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxFSZ46CypDxUJ3Q4d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzU3rhFWORvLpLDUsd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyB3pN2B5WajUu-3yd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcCr3OxSEKL2NbLHp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwI6F23JsNePI1lKkF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-ragJX_fO44proel4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzEjFT2Mv2OhKlGbUV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugylrstv-e-9xuHsAwl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
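A raw response like the one above can be parsed and indexed by comment ID before it reaches the coding table. The sketch below is a minimal, hedged example: the allowed values per dimension are inferred only from the records shown here (the real codebook may define more), and `parse_coded_batch` is a hypothetical helper name, not part of any pipeline shown.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# records above -- assumption: the full codebook may include more.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "approval", "mixed", "resignation"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records),
    keep only records whose values fall inside the codebook,
    and index the result by comment ID for fast lookup."""
    records = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[rec["id"]] = rec
    return records

# Usage: look up one coded comment by its ID.
raw = ('[{"id":"ytc_UgwI6F23JsNePI1lKkF4AaABAg",'
       '"responsibility":"none","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_UgwI6F23JsNePI1lKkF4AaABAg"]["reasoning"])  # deontological
```

Validating against a fixed value set at parse time catches the common failure mode of LLM coders drifting outside the codebook (e.g. inventing a new emotion label) before bad records reach analysis.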