Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let's play devil's advocate: poisoning AI may currently still work (I haven't done any research beyond what you said in the video, so I'll keep it at "may"), yet it will probably be fixed in the future. If companies see a decrease in utilization due to poisoned data sets, they will find some way around it. Let's be frank, we're talking about artists against mega corporations; the bounties such companies will place reflect the loss of revenue. E.g., if they lose even 5% of their income to it (and expect an ROI time of 6-12 months), then the bounty will reflect roughly 9-18 days of pure profit. Which is not a lot, but these companies make a few billion worth of profit. So let's do some basic math, with the variables explained below:
- expected profit margin: roughly 10% (this is not a lot for a tech company)
- income 2024 (OpenAI): 4.9B USD (https://sacra.com/c/openai/)
- 4.9B p.a. * 10% = 490M expected profit (USD) p.a.
- 490M * 5% = 24.5M expected loss (USD) p.a.
So, based on ROI time, they wouldn't mind spending between 12.25M (6 months) and 24.5M (12 months) USD to fix this issue. And this is one company...
Now as for my stance on AI: I think AI can be a helpful tool for humans in day-to-day activities, **once** it becomes good enough. It could be used to mention certain laws at play in certain activities, or perhaps one day start helping with diagnosing patients. For now it can mostly be used to summarize information that's less crucial to humans, e.g. Excel's docs. YET art is something it will always suck at. A good story is relatable in some way; a good piece of art displays emotion; hell, even more utilitarian art will be hard to replace. Art is more than an image, more than some text; art has a heart. And AI will simply never be able to recreate that.
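The commenter's back-of-envelope bounty estimate can be sketched as below; the revenue figure comes from the comment itself, and the 10% margin, 5% loss share, and 6-12 month ROI window are the commenter's own assumptions, not verified numbers.

```python
# Sketch of the commenter's bounty arithmetic (assumptions per the comment).
revenue_2024 = 4.9e9   # OpenAI 2024 income in USD (per sacra.com, as cited)
profit_margin = 0.10   # assumed profit margin
loss_share = 0.05      # assumed income loss due to poisoned data sets

expected_profit = revenue_2024 * profit_margin  # 490M USD p.a.
expected_loss = expected_profit * loss_share    # 24.5M USD p.a.

# Willingness to spend scales with the expected ROI window of 6-12 months.
bounty_low = expected_loss * 6 / 12    # 12.25M USD
bounty_high = expected_loss * 12 / 12  # 24.5M USD

# The "9-18 days of pure profit" framing: bounty divided by daily profit.
daily_profit = expected_profit / 365
days_low = bounty_low / daily_profit    # ~9 days
days_high = bounty_high / daily_profit  # ~18 days

print(f"bounty range: {bounty_low:,.0f} - {bounty_high:,.0f} USD "
      f"({days_low:.0f}-{days_high:.0f} days of profit)")
```

The numbers reproduce the comment's figures: a 490M USD annual profit, a 24.5M USD annual loss, and a 12.25M-24.5M USD bounty range, which works out to roughly 9-18 days of profit.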
Source: youtube · Viral AI Reaction · 2025-04-02T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyBHgFB4287BjzEOQh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyDbDlTlb1bt7Y3f5N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzZb7CM2szKHHz5Y6J4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugweh0mldE_t8Vz_60h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxTmYcl-eKb47PA-1Z4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz22Who-M6YTRdLHhV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxfbOjcKdGf14UjRGx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzFzT857ug0GsEIa5d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyE0p9h6U8kLlJjhK54AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyTo6S-2pH4sYChbCZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
```
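As a sanity check, the raw response can be parsed and the entry backing the coded result above looked up by comment id. The snippet below is a minimal sketch: it inlines only the one matching entry from the array above (in practice you would load the model's full output), and the field names come straight from the JSON.

```python
import json

# One entry copied from the raw LLM response above; the full batch
# would normally be loaded from the model output instead.
raw = '''[
  {"id": "ytc_UgxfbOjcKdGf14UjRGx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Every entry should carry the comment id plus the four coding dimensions.
required = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(required <= entry.keys() for entry in codes)

# Index by comment id and pull the code for the comment shown on this page.
by_id = {entry["id"]: entry for entry in codes}
row = by_id["ytc_UgxfbOjcKdGf14UjRGx4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

For this comment the lookup matches the coded-result table: company / consequentialist / regulate / fear.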