Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
@nicormoreno While styles aren't considered copyright or anything like that and imitating a style isn't wrong (regardless of what some tik tok users may think) the way Ai was trained was 100% unethical and the way it uses those images are very different than how a person would, one is with intent and understand and the other is just doing what it assumes is the correct choice. The point overall is the majority of people didn't agreed to have their art used to train Ai which a lot of people view as a replacement to them. Where if asked I think 90% of the art community would have said no. Thing is large tech companies do not care how they get that data ethically or not especially with revolutionary/extremely innovative technology which has been in sci-fi movies for decades and will bring them trillions in revenue. Tech companies set aside large chunks of cash to pay the lawyers if and when lawsuits come and the little guy likely isn't going to win. It's the sad reality of the world we live in but tech companies either ask and if you agree take, or ask you disagree and they take there is no option a tech company goes "well they said so guess we can't" this has been shown for decades they do no care about you or your feelings but about the profit to be made Ai already has 100+ billion sunken into research for it so they expect at least 2 - 3x that and whatever else is spent on it to be returned. It's why I don't understand why people think Tech companies will suddenly be ethical like lets be real the device you and I are on 100% had some child labour and forced labour involved in making it. Ai is a marketing term that ironically movies made us all aware of far before it was ever a thing to even look at seriously and now is the biggest buzzword to overcharge a bad product and profit off massively
youtube 2024-07-16T06:4… ♥ 11
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_Ugy4OtmZ9_Y1vzWBF414AaABAg.A5vjlV_zHHmAFed3TNgrxl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwsF_CxY2y7BhElGKZ4AaABAg.A5vhbPZtfOJA5zY1FrL8xx","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgxBhToXCxm7z_U5w-54AaABAg.A5vfej9TKQMA5ytjSeXpiI","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgxBhToXCxm7z_U5w-54AaABAg.A5vfej9TKQMA6-LRRzQBQI","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytr_UgxBhToXCxm7z_U5w-54AaABAg.A5vfej9TKQMA64z39EHQt0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwK9oDgbSUDXamjswB4AaABAg.A5vTfTPlgHGA5wpq2Em5su","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwK9oDgbSUDXamjswB4AaABAg.A5vTfTPlgHGA6KSMFTZNRz","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgydzWuQ5BolQJ_AYKB4AaABAg.A5vL-GvtDXfA5vdDZxxOoW","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgydzWuQ5BolQJ_AYKB4AaABAg.A5vL-GvtDXfA5wG6GtC9Xh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytr_UgydzWuQ5BolQJ_AYKB4AaABAg.A5vL-GvtDXfA61Hsw9RvIX","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
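To trace a coded comment back to its raw record, the response above can be parsed as a JSON array and indexed by comment id. The sketch below is a minimal, hypothetical example of that lookup; the `raw` string, the `ytr_abc` id, and the `index_codings` helper are illustrative assumptions, not part of the original pipeline, and only the four coding dimensions shown in the table are assumed.

```python
import json

# Hypothetical raw LLM response, shaped like the array shown above
# (the real ids are long YouTube reply identifiers).
raw = """[
  {"id": "ytr_abc", "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions reported for each comment.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM response and index the codings by comment id.

    Missing dimensions default to "none", matching the label used
    in the response when no code applies.
    """
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "none") for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw)
print(codings["ytr_abc"]["responsibility"])  # developer
```

With an index like this, the coded values shown in the result table for any one comment can be checked directly against the exact model output.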