Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t think that this is something we’re going to be able to prevent. I know that is disturbing, and I know many will say that we can’t accept that as a truth and must do something to prevent it from happening, but the reality is there is no way to stop it. There’s a whole new paradigm shift coming, a reality shift, where imagination is going to become reality. We may have to accept deepfakes as a manifestation of people’s imaginations. There may even be a law one day that says if you can imagine it in your mind and an AI can replicate it, then it’s First Amendment protected. I’m not saying that this is what we should strive for. Obviously I don’t want deepfake pictures or videos of my own 17-year-old daughter, my wife, my sister, my mom, me, or anyone I know out in the world for people to look at. I’m just saying, as a computer programmer myself, I’m trying to think forward to the end result of this tech, and I just don’t see a way of stopping it. And when something becomes inevitable, eventually people make excuses for it, and move forward far enough into the future, and it becomes not only commonplace, but accepted. Maybe 20 years from now. But I just see that that is where this is headed, unfortunately. For now we’re able to stamp out these deepfake fires as they pop up. But as the technology becomes more and more available and easy to use, this issue is going to happen more and more frequently, and become harder and harder to find, control, prevent, and stop. Until eventually one day, there’ll be iPhone apps for it. Heck, there already are iPhone apps for it.
Source: youtube · AI Harm Incident · 2023-11-12T23:5… · ♥ 1
Coding Result
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: resignation
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugwqsx00XUw6nffvG6B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwbO4aQw6etlO5S_oJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFysglJsIIanhTFih4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyx1SLeJw6T8nYBx_t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzb4ct0KxS3xMW8TNp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYMS-RLX6uwiYhVuN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy-f6AYWj3NFpvhDzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxU1FcjhXkUKVrK-TF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0RNhC00yQ_3WwPQZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyU03Miw8lHSkNvXJF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
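The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and looked up per comment (the variable names are illustrative, not part of any tool shown here; the snippet assumes the response is valid JSON, which a real pipeline should not take for granted):

```python
import json

# A truncated example of the raw LLM response shown above:
# one object per coded comment, keyed by the YouTube comment id.
raw = '''[
  {"id": "ytc_UgyYMS-RLX6uwiYhVuN4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw0RNhC00yQ_3WwPQZ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]'''

# Index the codings by comment id so a single comment's coding
# can be retrieved in O(1).
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for one comment.
coding = codings["ytc_UgyYMS-RLX6uwiYhVuN4AaABAg"]
print(coding["emotion"])   # prints "resignation"
print(coding["reasoning"]) # prints "consequentialist"
```

In practice the raw string would come straight from the model, so wrapping `json.loads` in a `try/except json.JSONDecodeError` and rejecting entries with missing or unexpected dimension values is a sensible guard before the codings are stored.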