Raw LLM Responses
Inspect the exact model output behind any coded comment: look up a comment by its ID, or browse the random samples below.
- "IDK man if I had an AI clone of myself to attend work meetings I’d be a lot happ…" (`rdc_oh20rm1`)
- "last saw it was illegal to have zero driver for driverless and was required by …" (`ytc_UgxFcU_91…`)
- "The 1990s moral panic around the internet was a period of widespread anxiety and…" (`ytc_UgwN3yhKf…`)
- "To be fair to chatGPT, there are ways to make it create the arguments for God, a…" (`ytc_Ugypi86le…`)
- "I mean a core part of ChatGPT is that it lies to you, it just doesn't know it is…" (`ytc_UgxWpKAx1…`)
- "Trump aside, there can be problems with regulation. Such as ai being programmed…" (`ytc_UgwiZnno_…`)
- "Dude, if someone who works in artificial intelligence tells you AI needs regulat…" (`ytc_UgwMD5JqV…`)
- "If bosses will understand that A.I. is a tool and not a replacement, it will sav…" (`ytc_UgzAgftYt…`)
Comment
> 1:00:35 -thank you! I always get into discussion with people who think they don't have to worry about AI (e.g. programmers), because AI cannot come up with something new. But since I knew of Alpha Zero (the chess engine that only learned from playing against itself), I was convinced, that this is too naive. There is a limit of truly new things in the universe. But combining two things two something new is also an act of creativity. And unlike us, AI could even just combine billions of random things and find the 0.01% of things that actually are useful. And clearly, as Geoffrey Hinton pointed out with the amount of knowledge and calculation power, the more knowledge you can combine, just by pure chance at some point something good comes out. And I did not even think of the size of an AI's association network. That's truly terrifying.
youtube · AI Governance · 2025-07-21T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzZmZVLzPKSHYdj_3B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxsP8V7xEzbGI4oNAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzrDd6MA9XD5JxfG6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyItGTyvstz8J-A74l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgytDp18mJgjbSCbg2V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxLOXl7cIrlRzX1Z614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzRgDTReIsEpa5ujxt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx9omvue6fyzc_TMoZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"},
  {"id":"ytc_UgwE2i83MncddjfncSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwTNefw0msQmBaJDz94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
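The raw response above is a plain JSON array with one object per comment, each carrying the five coding dimensions keyed by the comment's platform ID. As a minimal sketch (in Python, which is an assumption, not necessarily what the tool uses), such a response can be indexed by ID to support the per-comment lookup shown above:

```python
import json

# Two entries copied from the batch response above; the real response
# contains one object per comment in the batch.
raw_response = '''
[
  {"id": "ytc_UgzZmZVLzPKSHYdj_3B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzrDd6MA9XD5JxfG6d4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# Index the codings by comment ID so a single comment's coding result
# can be retrieved directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzrDd6MA9XD5JxfG6d4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company fear
```

Because each object carries its own `id`, the batch can be parsed and joined back to the source comments in one pass, regardless of the order the model emitted them in.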