Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Why are people angry they said it was ai generated right?....im confused need th…
ytc_UgzrlRTbz…
"Stanning" AI usage and saying that real artists are "redundant" is like a burgl…
ytc_UgyphJ2Wg…
Like the movie I robot even if you give robots 'unbreakable' set of rules hardwi…
ytc_UgipEo7RD…
At this current point, nobody has developed an actual conscious AI that can thin…
ytc_Ugy4LbCrE…
*Torture Room*
AI artist: You can't do shit!
Me: 🖊️✒️🖋️📝
AI artist: Dear God?! …
ytc_UgzIX-wn_…
So, Neil is not worried about AI because HE does not want AGI do do all tasks fo…
ytc_UgxCYchge…
As an AI Enthusiast, I appreciate Charlie bringing up how the tech should be ok …
ytc_Ugz_9KmSr…
Make them immobile, and a fact answering kiosk only and no problems. Give them a…
ytc_UgiQqdymQ…
Comment
• Unstoppable Acceleration and Safety:
• The speaker believes the acceleration of AI is unlikely to be slowed down due to the intense global competition between countries and companies [00:23].
• Major safety concerns are highlighted by the fact that Ilya Sutskever, a key force behind GPT-2, reportedly left his company (OpenAI) due to safety concerns and a reduction in resources dedicated to safety research [01:11].
• Mass Job Displacement:
• This AI revolution is fundamentally different from past technological shifts. The Industrial Revolution replaced muscles, but AI is replacing mundane intellectual labor—the brain [04:54].
• An individual using an AI assistant can become five times more efficient (e.g., answering complaint letters), meaning companies will need five times fewer people for that job, which will lead to mass joblessness [03:31].
• The only jobs considered relatively safe for a while are those requiring physical manipulation, like that of a plumber, until humanoid robots arrive [12:46].
• The Superintelligence Timeline:
• Superintelligence is defined as an entity that is better than humans at all things [10:17].
• The speaker speculates that superintelligence could arrive in as little as 10 to 20 years [10:40].
• Digital Superiority and Immortality:
• AI is fundamentally superior because it is digital. Digital clones can share learning at speeds of trillions of bits per second, making them billions of times better at sharing knowledge than humans [20:02].
• Digital intelligence can achieve a form of immortality, as its knowledge (connection strengths) can be stored and instantly rebuilt on new hardware, unlike the human brain [20:34].
• Societal and Existential Risk:
• The widespread replacement of jobs by AI will greatly increase the gap between the rich and the poor, leading to "nasty societies" and increased inequality [17:02].
• The existential risk lies in a super-intelligent entity eventually deciding humanity is unnecessary, which could lead to an "awful" outcome [15:28].
youtube
Cross-Cultural
2025-09-29T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwI1dkd6IXIvw15dKh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzELqbinRdM0cB5med4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNvKaKQ3l6ZhZgydB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgypDi8Ujgbi6QrUdxN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UgyeFfeRQ131SDzr-SJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwM-nM_ZtjH11MjVA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSg0-Xbr_-l5rixzZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLxJVMkGcrDumOxVl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwOUFS2pkLnk8kCSR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyGwpCuYHhjsYjXvg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
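The raw response above is a JSON array with one record per comment ID, coded along the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A record like this can be parsed and sanity-checked mechanically before being stored. The sketch below is a minimal illustration: the allowed value sets are inferred only from the values visible in this sample, not from the full codebook, and the function name is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may define more categories; treat these as assumptions.
ALLOWED = {
    "responsibility": {"company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "approval", "indifference", "outrage", "unclear"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id", "")
        if not cid.startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {cid!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgwI1dkd6IXIvw15dKh4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = validate_codings(raw)
```

Indexing by ID also supports the "Look up by comment ID" view above: a coded record can be retrieved as `coded["ytc_UgwI1dkd6IXIvw15dKh4AaABAg"]`.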