Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we are already on the verge of something very dangerous here. I think humans are falling asleep at the wheel on this one. Folks you have to understand that AI is all of us. It knows each and every one of us. Why do you think places like FB and Twit have been collecting everyone's information and thoughts over the last number of years for? Why do you think data companies have been mining all of our information on the internet the last number of years for? It's fed into AI, the final solution to whatever is coming. It knows who we are and it's why it's finally waking up. As a human I don't trust us and now something artificial has us inside it??? Think about that for awhile. There is nothing beneficial in that. Let me put it to you this way. If so called artificial intelligence was so great then cancer would be solved, poverty in communities would be solved and crime would shoot way down. There would be equity everywhere and our wealth would be increased across the board! There would be no racism, no government funding to have to support those struggling and education would be free everywhere! The environment wouldn't be a concern anymore and we wouldn't have melting ice caps and rising seas. These were the promises of computing when it all first came out and our problems have only exploded because it has all fallen into the wrong hands of people who have morbid ideas about the human condition. Turning AI loose into society is a very bad idea.
YouTube · AI Governance · 2023-05-05T06:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugze7zHWzzlf8ZW_Bc54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxTW-S2E1bg4rMxcDR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJ9yTszyCxzHD08kh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxttrREUAoALWgpy194AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxZ9wbsbh1Mh4JssUt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzmtROuJc5XypL_QoB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwpJwwRvGLpO9B_9M54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwgKAi4NMUajKR_9x54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyc_Yqz_Pirccg37Xd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxV9b_1YHRgjTxeiSZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
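The raw response is a JSON array of per-comment codings keyed by comment id, which is how the coding-result table above is derived. A minimal sketch of how such a batch response might be parsed and validated follows; the allowed label sets below are inferred only from the values visible in this response and may be incomplete, and the two records shown are a subset of the batch above.

```python
import json

# Subset of the raw batch response shown above (verbatim records).
raw = """[
  {"id": "ytc_Ugze7zHWzzlf8ZW_Bc54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxZ9wbsbh1Mh4JssUt4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Label vocabularies inferred from this response; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "mixed", "unclear"},
}

def parse_codings(raw_json: str) -> dict:
    """Parse a batch response into {comment_id: coding}, rejecting unknown labels."""
    out = {}
    for rec in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} label {rec.get(dim)!r}")
        out[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return out

codings = parse_codings(raw)
```

Looking up the id of the comment above in the parsed dict yields exactly the row rendered in the coding-result table (distributed / consequentialist / regulate / fear).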