Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I used Chat GPT to ask what it thinks of the situation, here are the responses

Me - Recently there has been a lot of backlash from visual artists about their work being used as a data set in training in AI imaging models without their consent. They call this stealing. I'm curious to know what you think is right.

ChatGPT - It is important for visual artists to be compensated for their work and to have control over how their work is used. Using the work of visual artists as a data set in training AI imaging models without their consent or compensation could be considered unethical and may be considered stealing, depending on the specific circumstances and relevant laws and regulations. It is important for organizations using visual art in this way to obtain the necessary permissions and to respect the rights of the artists. This can help ensure that artists are fairly compensated for their work and that their rights are respected.

Me - If I'm an artist whose work has been used without consent, how should I go about it

ChatGPT - If you are an artist whose work has been used without your consent, you may want to consider taking legal action to protect your rights and seek compensation for any damages or losses you may have suffered as a result of the unauthorized use of your work. You may also want to consider contacting organizations such as the Artists Rights Society (ARS) or the Artists' Legal Advice Service (ALAS) for guidance and support in pursuing your legal rights. Additionally, you may want to consider reaching out to the company or individual who used your work without your consent to try to resolve the issue amicably. If this is not possible, you may want to consider making a public statement or joining with other artists to raise awareness of the issue and advocate for better protection of artists' rights.
Source: YouTube, "Viral AI Reaction", 2022-12-26T05:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       contractualist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzQ_TYQtNHSsd1HgKp4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxdsLyGcE6loIQRx6x4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx-4cjeJJSNuRkGMRJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxeCDmbTR37wV8eJxJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxRgsxHhHkXbwdbdv14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz53MBzzVPshHn3RYV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxaJQTZ9DEcvfmNghJ4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugw32KZC2FFUlmmubPd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw1RwJSyLeQnTCnrUp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxNlsBShGsEVeM_MPh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
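The raw response is a JSON array with one coding object per comment, keyed by comment id, and the comment above corresponds to the entry with id ytc_UgxaJQTZ9DEcvfmNghJ4AaABAg. A minimal sketch of recovering one comment's coding from such a batched response (the function name is illustrative, and the two-entry sample string is trimmed from the full array for brevity):

```python
import json

# Trimmed sample of the batched model output shown above:
# a JSON array with one coding object per comment id.
raw_response = '''[
  {"id": "ytc_UgxaJQTZ9DEcvfmNghJ4AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugw32KZC2FFUlmmubPd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codings_by_id(raw: str) -> dict:
    """Index each coding object in the batched response by its comment id."""
    return {entry["id"]: entry for entry in json.loads(raw)}

table = codings_by_id(raw_response)
coding = table["ytc_UgxaJQTZ9DEcvfmNghJ4AaABAg"]
print({dim: coding[dim] for dim in DIMENSIONS})
# {'responsibility': 'company', 'reasoning': 'contractualist',
#  'policy': 'regulate', 'emotion': 'indifference'}
```

Indexing by id rather than by position makes the lookup robust if the model returns the codings in a different order than the comments were sent.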