Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples (click to inspect):

- "ngl when i look at Alexia Anders she reminds me of Stephanie Soo, sorry but it c…" (`ytc_UgzmX1KA8…`)
- "Sadly no matter what you may think or do technology will always move forward. Ev…" (`ytc_Ugzj3f9Rv…`)
- "Oh, capitalism and AI, definitely not trying to diminish the value of the workin…" (`ytc_UgxBbtJTl…`)
- "Robots dont pay taxes... You prob could make it safe but not economy safe Just s…" (`ytc_UgwK2Tkxm…`)
- "I stopped at the 33 second mark. that isn't a real danger a self driving car wil…" (`ytc_UggJ82QW9…`)
- "I think what he meant by the danger of AI is because the program built on AI can…" (`ytc_Ugzjeb5lH…`)
- "You know it, I know it, so Anthropic/OpenAI knows it too - you just have to look…" (`rdc_ne3bv30`)
- "Ty for this.. I want to become an artist in the future but because of ai I’m ret…" (`ytc_Ugx5WPmzB…`)
Comment

> Ai is a godsend! Humans will only benefit from this technology,it will make life a whole lot better and more enjoyable

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2024-10-29T03:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxpAOssCerc6HFFQW54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugyb95tn5bvaaDGWKW54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcS-IqeOiDGyDOymN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2uxhmjbnsuAQ7O6d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgywRcGVPEVi1WyxfFR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxPnUjkOFLIUQU-6wx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxI1RL7DabZSl8h4Xp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwbFmFVwvgFEZaGqtF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgycKBydweDTgcl8xgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOjoOqdetaSMkko7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"]}
```
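Note that the raw response above ends in `"fear"]}` rather than `"fear"}]`, so it is not valid JSON, which would explain why every dimension in the coding result shows "unclear". A minimal sketch of how a coding pipeline might parse such a response with an "unclear" fallback (the function name `parse_coding` and its shape are illustrative assumptions, not this tool's actual implementation):

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str, comment_id: str) -> dict:
    """Extract one comment's per-dimension codes from a raw batch response.

    Falls back to "unclear" on every dimension when the JSON is malformed
    or the comment ID is missing from the batch.
    """
    fallback = {dim: "unclear" for dim in DIMENSIONS}
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed model output: record the coding as entirely unclear.
        return fallback
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return fallback


# A truncated stand-in for the malformed response above: the final object
# closes with `]}` instead of `}]`, so json.loads raises.
malformed = '[{"id":"ytc_x","emotion":"fear"]}'
print(parse_coding(malformed, "ytc_x"))
# → {'responsibility': 'unclear', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'unclear'}
```

A stricter pipeline might instead retry the model call or attempt bracket repair before falling back, but recording "unclear" keeps the failure visible in views like this one.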