Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Thank you so much for all these intellectual and thought provoking subjects that many would regard as fringe or even sci-fi bonkers. But that is far from the case, and this episode is no exception.
AI is already far ahead of human capabilities we just haven't realised it yet. Those shouting warnings about AI are much like Oppenheimer back at the dawn of nuclear when he expressed his concerns at the shear power that could very well end humanity. And it almost has several times over the decades, now imagine that human control element gone and replaced with an AI.
Yes people Skynet terminators time travel killing robots all possible with an AI smarter than the entire human race, doesn't bare thinking about in regards to winability.
Only a handful of individuals across the globe have the power to end humanity with disregard for any safety when it comes to AI. AI is not to be messed with. Nor could we ever truly trust it.
Source: youtube · Topic: AI Governance · Posted: 2023-07-07T11:5… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw4Z6MGm6XltSY-7hd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3LrSDv1jJol-oMON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwvW6bCajxg6y3xNxB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz1TC1hkXnRSB35mHZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxyHIRMErY1OPwDkvB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4XaDGeT6trJLnIxR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxRfrfYd9b_Ix9Arnt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyfXpoZkBEORLDPoPN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyVAMC0X0eemsmrcYR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyGbFiwh8TAfUKfNq54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
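A raw response like the one above can be turned back into per-comment codings by parsing the JSON array and indexing it by comment ID. The sketch below (in Python, which is an assumption; the actual pipeline language is not shown here) uses a shortened two-entry excerpt of the payload, but the field names match the raw response exactly:

```python
import json

# Shortened excerpt of a batch coding response; field names match the
# raw LLM response shown above.
raw_response = '''
[
  {"id": "ytc_Ugw4Z6MGm6XltSY-7hd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyGbFiwh8TAfUKfNq54AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]
'''

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict[str, dict[str, str]]:
    """Parse the model's JSON array and key each coding by comment ID,
    dropping any entry that is missing one of the four dimensions."""
    rows = json.loads(payload)
    return {
        row["id"]: {dim: row[dim] for dim in DIMENSIONS}
        for row in rows
        if all(dim in row for dim in DIMENSIONS)
    }

codings = index_codings(raw_response)
print(codings["ytc_UgyGbFiwh8TAfUKfNq54AaABAg"]["emotion"])  # → approval
```

Keying by ID makes the lookup shown at the top of this page ("inspect the exact model output for any coded comment") a single dictionary access, and the `all(...)` guard skips malformed entries rather than letting one bad row break the whole batch.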