Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's interesting that you say we can't change our code.....but we CAN. We have the choice to think differently, to learn new information and act on it or not act on it depending on the circumstances and environmental factors. This is exactly what an AI would do given that it had the ability to change its code.... The underlying core idea here is that we....and IT, can be given a choice.... That's the only thing that makes it different from u s. Right now we have to prompt it, but it's also making choices based off of the opinion of a mass population because the data that it's trained on is vast.... we have to code into it morality....because it doesn't have any. Isaac Asimov already talked about this in his books....and in the movie I Robot, there are rules that the robots must follow. We can code that in. This is not the end of the world. People need to stop being so dramatic. If we as americans don't get our stuff together and make it happen....then China will. Creating mass hysteria is not the answer. Morally speaking, China's culture has a lot more respect for others and for life and for balance than american culture does. Sorry, but they may be the best people to usher in the coming age of AI.... Don't scare people why what might happen. Suggest and encourage better solutions so that this crazy world that you're worried about where people are hopeless and purposeless, DOESN'T happen.
youtube AI Governance 2025-06-16T19:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzV8uyqcJUCBg-5wZJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwGWXo8pl9odkFc43h4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx1c1SpdmnoYtkTyZB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyiFLA1H_lFgijUhfh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQhidX7357DoCWgHZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzh4ijQszdir6A7rZR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwRAQ27JDSP_NlKeF54AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxOzmdM7KecQJ6EiRR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxWHKIIhUPnv-yaB4p4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzJr49U3-NaAeF2-xJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
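A minimal sketch of how a raw response like the one above can be parsed and matched back to a single comment. The JSON literal here is a subset of the response shown (two of the ten rows, copied verbatim); the helper name `lookup_coding` is illustrative, not part of any real pipeline.

```python
import json

# Subset of the raw LLM response shown above, reproduced verbatim.
raw_response = """
[
  {"id": "ytc_UgwGWXo8pl9odkFc43h4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyQhidX7357DoCWgHZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Return the coding row for one comment id, or raise KeyError."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            return row
    raise KeyError(comment_id)

# The row for the comment displayed above: its values should match the
# Dimension/Value table (responsibility=ai_itself, reasoning=mixed, ...).
coding = lookup_coding(raw_response, "ytc_UgwGWXo8pl9odkFc43h4AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
```

Looking the id up against the parsed array is also a quick sanity check that the model returned valid JSON and covered every comment in the batch.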