Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great podcast! I was speaking to someone the other day who used to make $1K-2K/month part time on fiverr for years up to 2023, selling marketing related stuff( articles, graphics, logos,etc) Now she says she is putting more work and often can't even do $300 as everyone can now use AI sites to do the same for free. Just one example. As for consciousness, there are ways to define it. Once an AI system can reprogram itself, improve and doesn't need a human to make decisions to run itself then at that point - it's there. It doesn't need to feel since it doesn't have a body. There are people( eg psychopaths or other disorders) who may not feel...yet they aren't AI so an ability to feel is not a definition of consciousness. It's an ability to create and make own decisions without anyone. And yes, a path to those decisions can be programmed too. We aren't that much different. Everything we do in our lives have been programmed from birth. From your parents telling you how to behave to ads, social norms and other. Then more programming(good or bad) via school, university, tv, trauma. Our brains are computers that get programmed and reprogrammed. And no, there's no safety that can be ever implemented. That's logically impossible. Even if 99% of all governments will agree to implement a measure, there will always be rogue entities or countries( Russia, N. Korea, China,etc - you know, the good guys...) who will continue working on their systems in order to at the very least create an AI that can penetrate other governments, steal money, data,etc. So, at this point it's: virus/antivirus world and the "good guys" just have to have a better system to counter.
Source: youtube · AI Governance · 2025-12-31T14:5… · ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyxayqWBJoj-gZVCdl4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzLZPDKHhTvGhgdgrl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw3cKri7LPpfZCYJYZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw2ljrrdGIM-8zaAH94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxwpDyMOgdQgsVDFjR4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxGTyJyXTpZEiKAuVd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgylF_2IjH_5SUQOBll4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwzSCboCozLEtOBH_54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugw6-zwmc48Y9-1gKe54AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzy4D6IwJMNdP_DozB4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
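The raw response is a JSON array with one object per coded comment: an "id" plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed to recover the coding for a single comment id — the helper name `coding_for` is hypothetical, and the two records are copied from the batch above purely for illustration:

```python
import json

# Two records from the raw LLM response above, for illustration only.
raw = '''[
  {"id": "ytc_UgyxayqWBJoj-gZVCdl4AaABAg",
   "responsibility": "distributed", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw3cKri7LPpfZCYJYZ4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

def coding_for(raw_json: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

result = coding_for(raw, "ytc_Ugw3cKri7LPpfZCYJYZ4AaABAg")
print(result["policy"])  # → liability
```

Looking records up by "id" rather than by position makes the lookup robust if the model returns the batch in a different order than the comments were sent.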