Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples:

- "At least AI wont change the fact that they will want you to pay them in gift car…" (ytc_Ugza2CPWs…)
- "as a disabled artist i think its stupid and ignorant for able-bodied and minded …" (ytc_Ugyl3A1Qw…)
- "@kalwallingford7039 1. No. Only the Stable Diffusion program is open source. Dal…" (ytr_Ugzi34eCt…)
- "Ai wont have made those threats when turned off, some programmer will have burie…" (ytc_UgwHffgwe…)
- "I genuinely think an advanced AI model such as AGI or ASI would just drive us to…" (ytc_Ugzph7RJ7…)
- "Do yall have the same reaction at traditional artists? (Not hating just asking ,…" (ytc_UgwsApYq8…)
- "1 thing right now with ai. Chatgpt... is the most profitable of them all with ne…" (ytc_Ugxx8uSD5…)
- "Well that Sydney Ai sounds like its mastered being a woman by being fucking ment…" (ytc_Ugwg-IRpg…)
Comment
Here's the thing. I'm not concerned about the tool itself. But just as the example was not as covered, but AI can. Also be used in net negative. I'll give an example if AI was great. Really great. I would love it. I actually because that means our life would be easier. But the assumption is that the same tool won't be used on purpose to clear oneself or a group or company of accountability when in actuality, especially if I can't actually see how the algorithm is set or I don't influence it. What not can I intentionally make it more difficult to have access? To something.

Not AI, but an example would be like you, I, yes, you are can be great. We know enough. We know a lot about user behaviors to create great y, but we also know a lot about using behaviors to make dark or negative u. I not even considering like with AI now will be easier to track and find anyone breaking contract. Or all of these sorts of things.

Point is, if AI was really, really great, I'm not really scared of a tool because I'd be using it too. I don't think it would just be for companies to use, but the problem is I just naturally would not be the type to use it in such a manner, whereas there's more than enough. I'm not gonna say a lot. I'm not gonna say a little, just more than enough that we'll take it and use it in such a negative manner. And that is my concern, and this whole conversation, not this particular podcast, but the conversation I finally internet as if this tool is gonna come and take over and possess our lives or whatever, is a straight up distraction of what the real concern is.
youtube
AI Moral Status
2025-08-07T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx30HbkDOYHJs9R94h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSDa0_-ClWhzFk8GF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzJg8UsqccCCC8bglN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzkMj1ypFVyq-5g1Nh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqfqSI69hjeRzkvUt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyWQuU4uzc1qxROLU94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz878AklwvOBOCGHqJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxl4NJ9tT4s93NIPfZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzeyA0rLUdjGKCJucx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMCOpbgE_3W3pIBb94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
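A response like the one above can be indexed by comment ID so that any coding is retrievable for inspection. The sketch below is a minimal, hypothetical example: the function name `parse_codings` and the `ALLOWED` value sets are not part of the original tool, and the allowed values are only those observed in this sample, so the real codebook may define more.

```python
import json

# Dimension values observed in the sample output above; the actual
# codebook (assumed here) may allow additional values.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "mixed",
                "resignation", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the allowed set, so malformed model output is
    caught before it reaches the database.
    """
    codings = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec.get(dim)!r}")
        codings[cid] = {dim: rec[dim] for dim in ALLOWED}
    return codings

# Usage: look up one coding from a small batch (IDs taken from above).
raw = '''[
  {"id": "ytc_UgyWQuU4uzc1qxROLU94AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]'''
codings = parse_codings(raw)
print(codings["ytc_UgyWQuU4uzc1qxROLU94AaABAg"]["policy"])  # liability
```

Validating against a closed value set at parse time is the important design choice: LLM output occasionally drifts from the requested schema, and failing loudly on an unknown label beats silently storing it.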