Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Is anyone going to mention that apparently all the major AI CEO’s have Fort Wort…" (`ytc_UgxQFIYKK…`)
- "Who defines improvement and what makes us believe AI is an improvement? Why do w…" (`ytc_UghF9MR_Z…`)
- "What can you do in the face of AI on social media? Get off social media.…" (`ytc_UgwfcF0DL…`)
- "I am the one who just rp with the ai, and if I break the filter, I'm out and go …" (`ytc_UgwEaVne3…`)
- "This video must be shared to all media in the world, all leaders in the world an…" (`ytc_UgyEimnij…`)
- "I’m not an artist, but I can imagine how much time, passion and effort real arti…" (`ytr_Ugxoxii6n…`)
- "This is freakyn totaly lie about ai i document my self about ,yes it is danger f…" (`ytc_UgyCfd4D0…`)
- "I'm actually an AI Transition Consultant and I go into companies and basically s…" (`ytc_Ugwu0lCXK…`)
Comment

> I've held very similar views for almost a decade. I'm more pessimistic though. We're deluded if we think we can control something smarter and faster than us. We're deluded if we think safety will be built in through policy. The AI arms race will be driven by fear (of being out-paced) and greed. The 'rules' will be ignored. The ONLY hope we have is, as Geoffrey stated, that AI is taught 'from a young age' that humans should not be harmed.

Source: youtube · AI Governance · 2025-06-21T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxf9IqS3bkx7BInt6F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSwnKHpzGfFsXcoRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyle81WqSsyT0BCn_Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwCCi3Jr2n0tbVfxhB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwOk29U43SHnz1zyLV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzVYOlRpBW2xQJTaF94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy4m84nJ-jaQPVEm1F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTAiPvtaFFj5-ny1d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgysyVSll-auLoBMZDV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4KUIlSAabt9nlg054AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
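
The raw response is a JSON array of per-comment records, so looking up a coded comment by its ID amounts to parsing the array and indexing on the `id` field. Below is a minimal sketch of that lookup, using two records copied from the response above; the variable names (`raw_response`, `by_id`) are illustrative, not part of the tool.

```python
import json

# Two records taken verbatim from the raw LLM response shown above.
raw_response = """[
  {"id":"ytc_UgzTAiPvtaFFj5-ny1d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgysyVSll-auLoBMZDV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

# Parse the array and build an id -> record index for O(1) lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Fetch the coding dimensions for one comment ID.
match = by_id.get("ytc_UgzTAiPvtaFFj5-ny1d4AaABAg")
print(match["policy"])   # liability
print(match["emotion"])  # fear
```

The printed values agree with the Coding Result table above (policy `liability`, emotion `fear`), which is the sanity check the inspection view is meant to support.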