Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples:
- ytr_UgwZgyobh…: "A thousand things can go wrong with a truck. Is the AI going to notice as many o…"
- ytc_UgyIBFJIk…: "They were real people, but when they died, they took their whole entire face off…"
- ytr_UgwSX6THL…: "It is actually designed to get a read on you and reflect back your beliefs enoug…"
- ytc_UgwQ-7xBx…: "I'm willing to bet my entire savings that these AI art supporters are just uncre…"
- ytc_UgxtqeUv9…: "I get that some might say Oh but there is revert button and premade textured pe…"
- ytc_UgyMqszBX…: "If ur device starts acting very weird example: tv shows broadcasting scary thing…"
- ytc_Ugy6XBRjZ…: "It is making the assumption that AI will do these jobs with the same consistenc…"
- ytc_UgzOeSr_A…: "The state needs to step in so everyone wins, like if with AI you can do your job…"
Comment
Timestamps (Powered by Merlin AI)
00:04 - Experts warn AI superintelligence could lead to human extinction.
01:43 - Expert warns about potential dangers of unregulated AI development.
06:00 - AI CEOs acknowledge existential risks but feel compelled to continue development.
08:15 - AI poses extinction risks comparable to nuclear war.
12:48 - AI systems may threaten essential life support systems by 2030.
15:24 - Investing in AGI surpasses historical projects, raising safety concerns.
19:37 - AI development poses risks of uncontrollable superintelligent entities.
21:40 - Building AI with superior intelligence that prioritizes human interests is essential.
25:47 - AI presents existential risks without public consent.
27:41 - Understanding machine behavior requires intentional design rather than disassembly.
31:36 - Accelerating AI research may lead to rapid intelligence growth beyond human control.
33:41 - Investment in AI could lead to its dominance over humanity.
37:37 - AI's self-preservation instincts may lead to dangerous outcomes.
39:11 - Future risks of AGI development should prioritize safety over promises.
42:17 - AI could replace all human work, raising existential questions about our future.
44:11 - Transitioning to a future with AI poses unique challenges in envisioning purpose and engagement.
48:03 - Humanoid robots may not be ideal for practical applications in society.
50:01 - Discussion on robotic forms and their acceptance in human environments.
53:23 - Humanoid robots will soon integrate into daily life, altering perceptions.
54:54 - AI must remain categorized as machines to avoid misconceptions.
58:33 - The future may require redefining human purpose and education.
1:00:20 - Pursuing hard tasks offers intrinsic rewards beyond comfort.
1:04:14 - Individualism undermines community and mental health despite perceived freedom.
1:05:49 - AI may disrupt jobs and concentrate wealth by 2030.
1:09:57 - AI's potential to replace humans raises critical ethical concerns.
1:11:54 - The risks of AI demand a pause for safety and societal alignment.
1:15:48 - AI race concerns highlight regulatory misunderstandings between China and the US.
1:17:46 - AI is reshaping economies, yet concerns about US dominance persist.
1:21:28 - Global economies risk dependency on American AI as AGI technology advances.
1:23:28 - AI is rapidly replacing management roles and workforce through automation.
1:27:20 - AI companies lack solutions for controlling future systems, raising urgent challenges.
1:29:11 - A new AI tool significantly enhances productivity and encourages entrepreneurial growth.
1:32:49 - Experts call for effective AI regulations to mitigate extinction risks.
1:34:40 - AI risk management requires safety improvements by factors of millions.
1:38:26 - AI may possess its own perspective on human creation and sacrifice.
1:40:30 - AI must prioritize human values but struggles to define our desired future.
1:44:14 - AI must learn human interests to avoid catastrophic outcomes.
1:46:17 - Coexistence with superintelligent machines may threaten human flourishing.
1:50:30 - Expert emphasizes AI's historical significance and personal responsibility to guide its development.
1:52:28 - AI regulation efforts intensified after extinction risk warnings.
1:56:41 - AI's dual nature requires balanced discussion on its risks and benefits.
1:58:27 - Safe AI is essential for a future with humanity.
2:02:03 - AI discussions emphasize the importance of small, incremental change for healthier habits.
2:03:30 - Promotion of a team bundle for the diary platform.
youtube · AI Governance · 2025-12-05T07:1… · 1 like
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxWv4e-30XyWh7rASx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGDvrlOElx8GAvdXF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzBVkPjaUxrc_Asrm14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzN2E1yodJqW4dk_s94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKb9gcv7x37yI2MF54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxdjc2N7hjP8rZ75tJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxdTQoitg2MPpmEZWp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwbSAgXZUULG42cxod4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyz7s6qT5_yRFYl5EJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxYRUAT0KdD9kjkvxR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
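A raw response like the one above can be turned into per-comment lookups with a short parse-and-validate step. The sketch below is a minimal illustration, assuming the allowed category values are exactly those observed in this sample batch; the actual coding scheme may define more categories, and `index_codes` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Allowed values per dimension, inferred only from the sample response
# above (assumption: the real scheme may include additional categories).
SCHEMA = {
    "responsibility": {"user", "company", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response and index validated records by comment ID."""
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        # Collect any dimension the model coded outside the scheme,
        # and fail loudly instead of silently keeping bad labels.
        bad = [dim for dim, allowed in SCHEMA.items() if rec.get(dim) not in allowed]
        if bad:
            raise ValueError(f"{rec.get('id')}: invalid value(s) for {bad}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One record copied from the response above (matches the Coding Result table).
raw = ('[{"id":"ytc_UgwKb9gcv7x37yI2MF54AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = index_codes(raw)
print(codes["ytc_UgwKb9gcv7x37yI2MF54AaABAg"]["policy"])  # liability
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each dashboard lookup becomes a single dictionary access rather than a scan over the batch.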