Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, AI should be regulated heavily and, somebody else should be in charge to advance AI, so it supports human-kind, planet. 5 psychopatic CEOs that are either creepy (Nvidia) or look like AI bots ;p (OpenAI) really dont do it well...
- all of the datacenters should be run by countries - and like the energy, people should pay for it in their monthly bill (if they use it, just like the energy)
- top 5 CEOs aka tech bros should be degraded
- actually these companies, because they use the energy and processing power so enormously, they should be taxed HEAVILY (like 75 %) - until they improve, improve their computing power, energy usage etc. (yes, it should be treated like polluting our planet)
- This actually could advance our civilization, our human kind. 100x times better CPUs, 100x better GPUs, 100x less energy used
- then the next step would be natural: human-kind should be aiming to "local" AI - inside your computer, laptop, mobile phone - without middlemen such as OpenAI, Anthropic, Nvidia. Maybe even like OpenClaw - you have your "bot" setup, let's say one, and it can for example work for you...
- everybody has it's own AI / data private. Right now, hmm people from different countries have their conversations, data passed to US tech bros - they do god knows what with it - it should be illegal actually... All kind of sensitive data should be handled by ourselves. Not third party, 5 big tech bros companies....
- all content crawled by AI companies (they do petabytes everyday, if you have website check your visitors, AI crawlers are there for sure) should be verified. If they ignore license - such material needs to be deleted and model need to be trained again (hello 2 billions $ OpenAI for each training ;p)
youtube 2026-02-12T20:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzoGEzrZ04dH0QSKKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwbIcGmgsKA7hhuvfx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw8mZpln6KfYEXT9CB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwhygt0NbliESaY0Vp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyifskYkxF13r9UCbd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyPFg3mI6ySGPUOb254AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyGzuwAAaFrosw9X7J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwa-R1JxLYIe496Upt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxHu-fRYwhE7h4YTKR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHmwZ5uJqK5vnHGmB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
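The raw response above is a plain JSON array, one object per coded comment, so it can be checked programmatically. A minimal sketch in Python (the field names come from the response itself; the excerpt string and the tally are illustrative, not part of the tool):

```python
import json
from collections import Counter

# Truncated excerpt of the raw LLM response shown above (two of the ten objects).
raw = '''[
  {"id":"ytc_UgzoGEzrZ04dH0QSKKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyGzuwAAaFrosw9X7J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

codes = json.loads(raw)

# Sanity-check that every record carries the four coding dimensions plus an id.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(c) == expected_keys for c in codes)

# Tally one dimension across the batch, e.g. policy.
by_policy = Counter(c["policy"] for c in codes)
print(by_policy)  # Counter({'liability': 1, 'regulate': 1})
```

The same pattern works for the full ten-object array; counting each dimension is a quick way to spot drift between the raw model output and the stored coding result (here, note the table records `regulate` while the first JSON object says `liability`).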