Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- The fact that people think that AI is simply taking "insparation" from people. l… (ytc_UgwDhT88s…)
- 17:39 So everyone will see this and think woe is me. AI can not do plumbing.… (ytc_UgwahouDN…)
- focus on what this guy said at the start. he’s building an ai, so challenging ho… (ytc_UgxkFnDVw…)
- Is there any role that you think will be good for pivoting to that doesn't neces… (ytc_UgwXe9gxN…)
- if we can program them to suffer, why consider it as a option. Let's say we hav… (ytc_UggD68gYW…)
- I mentioned this on Legal Eagle's video, but the use of AI generated art reminds… (ytc_UgxUbfd4U…)
- Well, crap, no wonder lots of people are freaking out about the video Kate Middl… (ytc_UgzEqkRgs…)
- The full video was awesome, highly recommend! As someone that uses AI including … (ytc_UgwM_0f36…)
Comment

> thank you for the information. Are we going too fast with AI? Available evidence certainly points to it. The AI mind is pretty chaotic. i don't see why we should openly use it everywhere and in everything without further study? The rate at which AI is developing, we can always complete these experiments within a year or two.

youtube · AI Governance · 2024-10-06T03:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxpAOssCerc6HFFQW54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugyb95tn5bvaaDGWKW54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcS-IqeOiDGyDOymN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2uxhmjbnsuAQ7O6d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgywRcGVPEVi1WyxfFR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxPnUjkOFLIUQU-6wx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxI1RL7DabZSl8h4Xp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwbFmFVwvgFEZaGqtF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgycKBydweDTgcl8xgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOjoOqdetaSMkko7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
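The raw response above is a JSON array with one coding object per comment. As a minimal sketch of how such a response could be parsed and looked up by comment ID, assuming the response arrives as a string (the `parse_codings` helper and the strictness of its validation are illustrative, not the project's actual pipeline; the dimension names are taken from the sample above):

```python
import json

# The four coding dimensions visible in the sample response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    dict keyed by comment ID, rejecting rows with missing fields."""
    codings = {}
    for row in json.loads(raw):
        if "id" not in row or any(d not in row for d in DIMENSIONS):
            raise ValueError(f"malformed coding row: {row!r}")
        codings[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codings

# Two rows copied from the sample response, inlined for a self-contained run.
raw = '''[
  {"id": "ytc_UgxpAOssCerc6HFFQW54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgxOjoOqdetaSMkko7R4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

codings = parse_codings(raw)
print(codings["ytc_UgxpAOssCerc6HFFQW54AaABAg"]["policy"])  # → regulate
```

A lookup by ID then fails loudly (`KeyError`) for comments the model skipped, which is one way the all-`unclear` fallback rows in the result table could be detected upstream. Note that the original response above is not valid JSON as shown: its final object closes with `"fear"]}` instead of `"fear"}]`, so a strict parser like this would reject it until that is repaired.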