Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This is essentially Technet in Terminator. The problem is that I don't think AI …" (ytc_UgwrkyHux…)
- "I don't get ai bros, really. No one claims they are chef for heating processed f…" (ytc_UgwlIuoKZ…)
- "Keep playing with AI and you'll get what you want. Elon already warn us about AI…" (ytc_UgzQ0zKon…)
- "The thing with that particular "hallucination" "how many strawberries in R" is i…" (ytc_Ugxa4KLoV…)
- "Potential Problem AI can create its own totally different uninterpritable lang…" (ytc_Ugw5L_l5c…)
- "Why would AI function without a purpose? At 53:48. What would be the reason for …" (ytc_Ugygb48AA…)
- "Is this video AI or are you dumb? Why would Mark Zuckerberg refer to his product…" (ytc_UgzwMJSD5…)
- "She is right, thinking abour de REAL A.I. like IBM s Q System 100 Trillion time…" (ytc_UgxgVCIpW…)
Comment
As of late, this channel appears to focus on topics that induce fear. Fear has a tendency to freeze us and I'd very much prefer to hear about how to steer our civilisation towards a future, where the advent of AI-Agents is seen as something amazing to embrace. Seriously, why are we afraid of AI? We are afraid, because we live in a capital driven world where we lose our legitimacy, if AI can do our job with a fraction of the cost. We simply will not be able to compete in a market environment like this in the long run. So why not think about a system, where we don't have to compete with AI? AI has the potential to set us free from having to work cognitively and physically in order to survive. And we will set AI to this task, no matter what - the invectives are just too lucrative to not do that. So if AI will eventually do everything better than us, why not use this platform with it's very interesting guests to co-create a vision of a world we actually enjoy living in and how to get there from the point we are at right now?
youtube · AI Governance · 2025-09-05T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgztjyV2mcFo1suxHRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzebN80q0_0g0XYQfZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOTzTjGmKjrsPBwwN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwhmRLE2MNIULKdif54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugxe5ksYWO5joP4TtTh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyvrELk55DTiqRsfR54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxW-mKdco7VAo9XqPF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyu7Srclkimq6NZfo14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwq2f-kONr1Uv79yCN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugw2JRH0YU60JCz_41x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
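A raw response like the one above has to be parsed and validated before the per-comment codes can be stored. The sketch below shows one way to do that in Python: it parses the JSON array and keeps only rows whose values fall inside the closed vocabularies visible in this page's samples. The exact vocabularies are an assumption inferred from the displayed data, and `parse_coding_response` is a hypothetical helper name, not part of any existing pipeline.

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# reconstructed from the values visible in the sample output above.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "approval", "outrage", "resignation", "mixed", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Rows with a malformed id or an out-of-vocabulary value are dropped
    rather than stored, so a single bad row does not poison the batch.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        if not cid.startswith("ytc_"):
            continue  # comment ids in the samples all carry the ytc_ prefix
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

Validating against a closed vocabulary is what makes the truncation-tolerant lookup above safe: anything the model invents outside the codebook is rejected instead of silently stored.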