Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

| Comment preview | ID |
|---|---|
| Whoever is screening the facial recognition better be prepared to fall in love w… | rdc_exg07z0 |
| I’m currently working an internship as a writer where I’ve been tasked to use Ch… | ytc_UgyaswJZc… |
| SO, May i ask a question, CAN WE FUCKING STOP AI DEVELOPMENT PLSSS, I RATHER HAV… | ytc_Ugwhtm_dn… |
| Ai "art" Is no more than defussion of promts and the text you write (i can say s… | ytr_UgyC6J9tc… |
| I watch all of this content and I am so bored about the never ending mystificati… | ytc_UgxYLDmuB… |
| When talking about this kind of subject, I am always reminded of a science ficti… | ytc_UgiQOIvme… |
| 8:24 I feel like ai doesn't just take their art in those specific grainy picture… | ytc_Ugy6IUuNf… |
| Also if you use image generator AI, it will only generate white people unless yo… | ytc_Ugx6b6-1J… |
Comment

> Perfect video, just keep talking about AI. I think that people all over the world would agree to regulate AI just like they agreed on nuclear bombs. First, a big catastrophe has to happen, and only then will we come to an agreement. That's how we humans are—first something dramatic happens, and then we react. Watch, and you will see that this is the case.

youtube · AI Governance · 2026-03-22T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
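The dimension values seen across this page (in the table above and in the raw batch output below) suggest a small closed codebook per dimension. As a minimal sketch, a coded record could be checked against that codebook before display; note the value sets here are inferred from the samples shown on this page, not an exhaustive codebook, and the helper name is illustrative:

```python
# Codebook inferred from the records shown on this page (illustrative, not exhaustive).
CODEBOOK = {
    "responsibility": {"company", "developer", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
rec = {"responsibility": "distributed", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "resignation"}
print(validate(rec))  # prints [] (all four values are in the inferred codebook)
```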
Raw LLM Response
```json
[
{"id":"ytc_Ugzoa8YE825Mk4s3vxF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzn6uCawICpUh9E3RN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxrV8pDbf469Dvibl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgygGKFZSnKIjWIPBpx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxLhAf8XUWTl94s6cF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzid7E66aGQ_tFc1eV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxN_NdLhvWmU0WokXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6Ff7rvN1idpqqzMd4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzcQlCMTKZt3rIHIah4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyl0xLhO4nfNPnJL1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
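A batch response like the one above is a JSON array with one record per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed to serve the "look up by comment ID" box (the helper name is illustrative, not from the tool itself; the two records are copied from the batch above):

```python
import json

# Two records copied from the raw batch response shown above.
raw = '''[
  {"id": "ytc_UgygGKFZSnKIjWIPBpx4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgxLhAf8XUWTl94s6cF4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]'''

def index_by_id(raw_json: str) -> dict:
    """Parse a batch coding response and index its records by comment ID."""
    records = json.loads(raw_json)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw)
rec = coded["ytc_UgygGKFZSnKIjWIPBpx4AaABAg"]
print(rec["policy"], rec["emotion"])  # prints: regulate resignation
```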