Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As humanity, I have a few different theories for the outcome:

1. We'll simply not be able to program any AI smart enough to do that, simply because we're good at ignoring the things we don't know or can't know, or even the things so far outside our ability to know that we're unaware of them.
2. We somehow manage to do it, and some individuals think they're clever enough to use it for their own advantage.
3. Before we, the currently living generations, have to worry about the full extent of that, we may well be dead; it could be thousands or even tens or hundreds of thousands of years before it actually happens.

We shouldn't forget that it's easier to make people believe you did something great than to actually do it. ChatGPT, for example, is probably still a bit overblown, though it feels as if the hype is slowly dying down.

And another, independent thought: what if we, as living beings with consciousness, are actually not much more than biologically run machines that developed emotions and other traits to survive in the world? For nearly everything we can find some logical explanation for why it happened to us, and, as mentioned in the video, what makes us independent from the actual machines? We're just as much made up of atoms that obey the laws of physics, so why should organisms be an exception to the rule? In what system do you have a rule whose exceptions themselves do not follow a rule?
youtube AI Moral Status 2023-09-04T21:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwuvPMvW1Yd5WAkGBd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy3gnCoP-xn7-S94MF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyCEPREMTYhaJWhC5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwk6wTlq55JmN6yt8t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxn1WdFZU03E9Dt6NN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwBaL9lMw5ttOIWXfJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz2p_H50QBytY42knB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwq61-e9i9c3mZ6Hy54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx_tLfNlBoYL7u6GuB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzshgq8A2AcZcL7Ftt4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"} ]