Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The deep fake problem is impossible to solve tbh. Reminds me of how PH removed a lot of their videos because of child abuse and rape porn, but it's still all over the internet. All you can really do is stop the main players, but it won't stop the majority of the internet from making them. Porn not surprisingly is at the forefront of a lot of new technologies and is a huge driver. The moment we find a way to spot deepfakes another technology to fix that issue will appear. Awareness of the issue is important but it feels like a bit of a waste of time trying to stop it. AI is the solution to stop deep fakes, but also the solution to create deep fakes to fool even the previous AI. It's a revolving door. Interestingly enough Google believes open source is winning the AI war. They are people in the field such as Open Ai's Sam Altman who is pushing for regulation and standards in the field of AI. Which would be troubling for solo developers. It's a lot easier for a large corporation to follow guidelines and even find a way around them than a solo developer who is focused on only the product and might not have the resources to meet the requirements. We might be hitting an inflection point in technology soon, but it's too early to tell. Allowing only large corporations and the government to have access to AI seems a little bit scary. TLDR: Deepfakes can be mitigated on main platforms. They can't be stopped from being created unless we create standards in the AI field. Creating standards and regulations in the AI field might only benefit large corporations. I believe we have to take the good with the bad if we want technology to be accessible to everyone. Soon, we might rely on AI as much as we rely on our phones or the internet and it will become a major part of our daily lives.
youtube AI Harm Incident 2023-05-21T17:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_Ugyvb8j5BA5qliRHm5B4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgynhR3CdmMYw0gqve94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
 {"id":"ytc_Ugyf9-X_B1hJ3SkWE-l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugy-VNzJyFiAzwkwzXR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzsjzPh_fn78or1FdR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxurUDjJ_76GPwueaJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwwXusqL7rvRL512U94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx_3NnE7V806gV9cat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyOo9lH3o2jYdNJK_x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
 {"id":"ytc_Ugzd22DP_5SH8U3z5HJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
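The raw response above is a JSON array in which each record carries an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed by comment id is shown below; the function and variable names are illustrative, not part of the tool, and the `raw` string uses a made-up id for brevity.

```python
import json

# Illustrative single-record response; real responses contain one
# record per coded comment, keyed by the YouTube comment id.
raw = ('[{"id":"ytc_abc","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')

# Every record is expected to carry exactly these fields.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM coding response and return {comment_id: coding}."""
    records = json.loads(raw_response)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        # Drop the id from the value; it becomes the dictionary key.
        by_id[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return by_id

codings = index_codings(raw)
print(codings["ytc_abc"]["emotion"])  # resignation
```

Indexing by id makes it easy to look up the coding for any single comment, which is what the "Coding Result" table above displays for one comment.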