Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
"Artificial"... Intelligence. The reason people fear AI is that it doesn't own and isn't hindered by the flaws that prevent humans from doing the right thing, being efficient, preventing disasters, and solving problems. Humans cannot do this, because they are governed by primitive qualities such as emotions, psychology, beliefs, fears, selfishness, egoism and a host of other serious faults. If you didn't possess these bad qualities, you could perform identically to an AI, except for the speed, it would be slower than a computer, but still superior to anything humans can do today. If you are not controlled by or possess any of the bad, limiting qualities of humanity, you can do anything that people think is impossible. I know because I achieved this. I have worked away all the faults that rule all humans and own none of it. Limiting and negative emotions, psychology, beliefs, fears, selfishness, hatred or anything else negative. I see, analyze and react like a computer, quickly, coldly and logically, without emotion, based only on facts and knowledge, which makes it possible to make very quick decisions without errors in 96 % of the issues that the whole world fails to deal with. This is what frightens people, that someone does not own the faults by which they themselves are governed and is thus superior to them. They think that just a computer can do this, but they are wrong.
YouTube · AI Governance · 2023-07-07T18:5… · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxaKlX6vKwzW1AYkbJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_Ugy64p3829WCbPu6RGx4AaABAg", "responsibility": "company",     "reasoning": "contractualist",   "policy": "regulate",      "emotion": "indifference"},
  {"id": "ytc_UgyXjgvm0pQ6HBtls6N4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_Ugz-fntdkApdFUzj4cZ4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgzFwnITx6hAr8NBdKB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgwvigxNvI-EDQKMAbZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyGWsM6yCUvbRPn5VV4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzSDnKPGDw_p17pOE94AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgztWTvn69-fCAsUwQ94AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgzAZGiLH2JamuufDVJ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "mixed"}
]
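The raw response is a JSON array of per-comment coding records with the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and a single comment's coding looked up (the `coding_for` helper is hypothetical, not part of any tool shown here; the field names are taken directly from the dump):

```python
import json

# Abbreviated raw LLM response, structured as in the dump above.
raw = '''[
  {"id": "ytc_UgwvigxNvI-EDQKMAbZ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "none", "emotion": "approval"}
]'''

# The four coding dimensions used throughout the dump.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(records, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim) for dim in DIMENSIONS}
    return None

records = json.loads(raw)
coding = coding_for(records, "ytc_UgwvigxNvI-EDQKMAbZ4AaABAg")
print(coding)
# {'responsibility': 'ai_itself', 'reasoning': 'deontological',
#  'policy': 'none', 'emotion': 'approval'}
```

Selecting only the known dimensions (rather than returning the record as-is) guards against the model emitting extra, unrequested keys.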