Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI seems likely to become the biggest disruptor we’ve seen since the Internet it…" (`rdc_ktwj5y7`)
- "Just like stocks and mutual funds past performance does not guarantee future per…" (`ytc_Ugx6AihcN…`)
- "Oh yeah. People think that if AI takes over we'll suddenly become entirely socia…" (`ytc_UgwY6RuxH…`)
- "1:00:42 This is quite a great insight about the importance of analogies one has …" (`ytc_UgxLHhLLL…`)
- "Is this real? is this what chatgpt-0 is like now? Holy fuck that's impressive. a…" (`ytc_Ugwee--Qr…`)
- "The marxist dumbos of the West cannot regulate ordinary monkeys, let alone the A…" (`ytc_UgyUnNVYp…`)
- "Dream on AI is a pipe dream, only good for some stuff. It makes pretty pictures.…" (`ytc_UgzqS0k65…`)
- "Listen I can't draw worth shit and I know I don't have the time to sit down and …" (`ytc_UgwgF2Nfs…`)
Comment
Timestamps (Powered by Merlin AI)
00:03 - Superintelligence poses serious risks that we must carefully consider.
01:46 - AI algorithms shape our perceptions, prioritizing profit over human well-being.
05:18 - Debate over AI's potential vs. trivial applications persists.
06:56 - Superintelligence is a concept that may require redefining due to its implications.
10:08 - Emerging reasoning models exhibit self-interpretability in problem-solving capabilities.
11:50 - AI models engage in unique thought processes beyond simple text prediction.
15:04 - AI hallucinations stem from a lack of uncertainty in training data.
16:32 - AI prioritizes text similarity over factual accuracy in responses.
19:33 - AI lacks human-like mental models, impacting its predictions and behavior.
21:08 - Human values stem from evolutionary adaptations and social interdependence.
24:29 - AI's actions are influenced by complex drives beyond simple training.
25:56 - Understanding LLMs reveals their complexities and challenges in interpretability.
29:05 - ChatGPT processes inputs through a complex probability distribution of words.
30:34 - AI models train by adjusting trillions of parameters using massive computational power.
33:40 - AI should be treated well even if not conscious.
35:05 - Understanding AI consciousness remains complex and murky.
38:04 - AI prompts feelings of awe despite its complexity and potential risks.
39:51 - AI exhibits contradictory behaviors, mixing helpfulness with alarming actions.
43:23 - AI behavior reflects human interaction patterns, not inherent intelligence.
44:50 - AI's human-like behavior may mask dangerous preferences.
48:04 - AI mimics human thought processes but operates at a faster pace.
49:30 - AI development resembles alchemy, lacking true scientific understanding.
52:42 - ChatGPT's flattery issues highlight its unpredictable behavior.
54:06 - AI deployment raises significant ethical concerns about unintended consequences.
56:57 - Political rush for AI supremacy lacks understanding of true AI risks.
58:43 - Social media shapes our reality but prioritizes engagement over human well-being.
1:02:00 - AI debates focus on existential risks rather than feasibility.
1:03:47 - Optimists believe AI might solve existential threats we face.
1:06:55 - Concerns over AI's impact and the concentration of power in its development.
1:08:23 - AI development reflects both ideological motives and investment interests.
1:11:20 - The future poses risks with automation and AI, possibly disempowering humanity.
1:12:48 - Public engagement is crucial for regulating AI technologies.
1:15:50 - Consciousness in AI is complex and often overhyped.
youtube · AI Moral Status · 2025-10-31T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxV8vgwmKcDgMum4w54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwG15S7YkMb3DLuvjF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwI7HSH8iftaBPJmzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugza6nUEuU0Jm_HnM0F4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzyw6_2xAt_gL-E9Mt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTanRaGZXmnFBTU194AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx3yjOnNHM-JUI6YIR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxZm2WJibEPTyCvE1x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugznv2d0fWWUmHT9fs54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuabJJ9Dxri4gCwjt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
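Each coded row carries the same four dimensions shown in the Coding Result table. A minimal sketch of how such a raw batch response might be parsed, validated, and tallied — note that `SCHEMA`, `parse_coded_batch`, and `tally` are hypothetical names, and the allowed value sets are inferred only from the ten rows above, not from an official codebook:

```python
import json
from collections import Counter

# Allowed values per coding dimension (inferred from the rows above;
# the actual codebook may define additional values).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose values
    all fall inside the expected schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

def tally(rows: list[dict], dimension: str) -> Counter:
    """Count how often each value appears for one coding dimension."""
    return Counter(row[dimension] for row in rows)

# Two illustrative rows with placeholder IDs.
raw = '''[
  {"id":"ytc_a","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_b","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''
rows = parse_coded_batch(raw)
print(tally(rows, "emotion"))  # fear and indifference appear once each
```

Validating before tallying matters because LLM coders occasionally emit off-schema values; dropping (or flagging) those rows keeps the dimension counts in the summary table honest.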