Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "What he actually means is that Trump and his little cronies have messed up being…" (ytc_UgyNcOH4w…)
- "This resonates with me! Relying too much on AI like ChatGPT can really make us l…" (ytc_UgyYbRzn9…)
- "Its not that AI is being 'malicious ' in any of this , but what would be its ULT…" (ytc_Ugx3Vs5F4…)
- "There is an entire world of prestige TV that an AI could absolutely not handle. …" (rdc_jirnbg2)
- "Ethan Klein from the H3 Show is doing the same and is leading a class action sui…" (ytc_Ugxo_FLJ8…)
- "Bro that doesn't tell me anything new maybe I'm just a conspiracy theorist but h…" (ytc_UgxyoUNOa…)
- "I've had similar things happen to me because of all of the misinformation ai has…" (ytr_UgzPBzEed…)
- "3:15 was a game-changer... AI that gets math mistakes before you do! Just like O…" (ytc_UgxR05YqV…)
Comment
I think the prediction has several false assumptions - That there will only be 2 AIs, and that they will secretly work together.
But that's not true. There are thousands and thousands of open source AI models today, with hundreds of millions of downloads.
So in the future, there will be millions of different AIs running independently of each other. With no secret cooperation between them.
Also humans maybe be stupid in compared to an AI intelligence, but humans don't want to let go of power... Meaning even if the AI is smarter, humans will still want to be in control of AI... And we will want to have an off swich for all datacenters...
youtube · AI Governance · 2025-08-02T10:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxrvlogwJZB58hOB_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwoUx87KXqIWbwN1Q14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGImwA63MuO5spN-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzrSXfQ9Bjmmnq2FQt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzrhff0H_Xx4-NH7Mh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMTt3LriXXuv4xDjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxddJo4Ept2MASBc114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsj7GVSNn7RBI2O614AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy1ISDQceYi74QCqG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2IUP-6IzJk8ppuxl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
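A minimal sketch of how a response like the one above could be parsed and validated before the codes enter the dataset. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response; the allowed values per dimension are inferred only from the examples shown here and are an assumption — the actual codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "government", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the schema, so malformed codes fail loudly instead of
    silently entering the dataset.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Example: look up one comment's codes by ID (hypothetical record).
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["emotion"])  # indifference
```

Indexing by `id` is what makes the "look up by comment ID" view above cheap: each comment's codes are retrieved in constant time rather than by scanning every response.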