Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
AI is just the ability to guess the most relevant word. When you feed these mode…
ytc_UgydraiAl…
Fake the guy is have a armor when smashing him and the camera change him as like…
ytc_Ugz-O20dG…
We need an AI Non-Proliferation Treaty NOW!
We're in the nuke the f out of Nev…
ytc_UgxL0iLOa…
It’s so amazing to have a voice that is so openly against AI, it gives me a lot …
ytc_UgwYSJQ17…
Better rephrase that: "A.I. will be smarter then us" . Humans will provide A.I. …
ytc_Ugx73ZYMk…
i'd rather talk to this AI than talk to the indians that I can't understand…
ytc_UgzOM3MKu…
times like this is where i really question Neil. yeah only 5000 people die every…
ytc_UgwCZ7IUK…
Imagine being so creatively bankrupt that writing 2-3 sentences into a box so a…
ytc_Ugzzej_Kc…
Comment
0:00 - Introduction to the discussion on AI fear.
0:30 - Discussion about the "coordinated effort" to make people afraid of AI.
3:07 - Explanation of why Chinese AI models are favored by some in the startup community.
4:57 - American companies' stance on AI regulation.
5:18 - The shift from AI "doomerism" to optimism in 2023-2024.
10:46 - Introduction of Sintra AI as a proactive AI helper.
11:50 - Discussion on "functional AGI" versus general intelligence.
16:01 - The broad applications of coding agents, even for personal health.
16:38 - The "Maltbook" social network where AI agents communicate.
17:52 - Significant improvements in AI model performance between October and December.
18:50 - The impact of AI on jobs, particularly for non-coders and knowledge workers.
28:02 - Promotion of Paleovalley protein sticks.
28:32 - Discussion about Universal Basic Income (UBI) and human need for contribution.
39:56 - Thoughts on the cost of energy and labor approaching zero and societal changes.
50:15 - How humans can thrive by creating products and services that benefit others.
50:46 - Advice for Gen Z on how to adapt and utilize AI.
52:48 - Practical steps to automate repetitive tasks using AI, including personal examples.
1:04:32 - The idea of society developing "antibodies" to harmful technologies like AI.
1:05:52 - Concerns about the information landscape and the spread of fake AI videos.
1:06:49 - The concept of narrative control and the "iron law of oligarchy."
1:11:47 - AI as both a centralizing and decentralizing force.
1:12:27 - The rise of entrepreneurship driven by AI and individuals making millions.
1:13:54 - The potential of AI to create a more informed information landscape.
1:15:18 - The debate on whether personal AIs can overcome centralized AI surveillance.
1:18:19 - Discussion of "The Sovereign Individual" and welfare states.
1:19:15 - Optimism about humanity's self-correcting nature and American innovation.
1:22:25 - The impact of treating productive elements of society poorly on national competitiveness.
1:23:10 - The speaker's fundamental belief in humanity's ability to fix problems.
1:23:32 - Counter-argument about the uneven distribution of intelligence and interest.
1:24:12 - The idea that intelligence is overrated.
1:29:49 - Belief that only a small percentage of adults will ever change.
1:30:46 - The importance of self-driven change.
1:30:55 - Discussion on free will and consciousness.
1:49:37 - Final thoughts on the goodness of people and society's ability to improve.
1:49:56 - Amjad Masad's social media and website.
1:51:00 - The long journey of Replit and the importance of perseverance.
youtube
AI Jobs
2026-02-25T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugws8lMnm980lhITBjx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxamFMpHUhXzgQte2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwNjJyml68EQYUupGp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwqJBCOt2prTqEkbNp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyq8im9k4ouCpJC7qB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzxE4dSvR3imOx9dgJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy8ZNxzh68hr2q8HEh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwpCz8JTpxighDWS954AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxabYiw4yBKIjQkjIF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwy-XxrculMnJW3Kgt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
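The "Look up by comment ID" view above presumably works by parsing the raw model output (a JSON array of coded records, as shown) and indexing it by the `id` field. A minimal sketch of that lookup, assuming the array format above; the function name and the shortened IDs here are illustrative, not part of the tool:

```python
import json

# Hypothetical raw LLM response in the format shown above
# (comment IDs shortened for illustration).
raw_response = """
[
  {"id": "ytc_Ugws8lMn", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxamFMp", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_by_id(response_text):
    """Parse the model output and map each coded record to its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgxamFMp"]["emotion"])  # prints "indifference"
```

Indexing into a dict makes each per-comment dimension lookup (responsibility, reasoning, policy, emotion) a constant-time operation, which matches how the "Coding Result" table for a single comment would be populated.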