Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
as some1 working with this everyday i can guarantee you that AI as seen in the terminator or matrix and so on is something that's not to be feared, and will not be a threat for a loooooooong time. The thing we call AI today is nothing like scifi AI, and it's even wrong to call it AI, but we will go with it anyway :) AI is super usefull (but yeah can be dangerous as everything else, since humans control em (or in this case program them)), they are usefull because they can do what you program them to do super well, but at the same time they are dumb as ****, cause everything outside of the scope they have been programmed to do, they simply can't, or atleast they are exeptionally bad at it (think toddler level bad). The AI robots you can see fx. here on youtube may seem like AI, but they're not... they are simply an expensive chat bot. they are made by feeding it alot of questions and answers into an algorithm, and then when you ask it a question it will through the "trained" algorithm give you the most likely correct/appropriate answer in return. It's basically the same you do when feeding an algorithm statistics to se what future outcome is of new input (like a weather forecast), and in this case no one would claim the algorithm can truly see the future, but only predict the most statistically likely outcome. so the fear of AI is ungrounded, though the fear of misuse of technology on the other hand is a real factor. like with all other technological steps in human history, if it can be used to kill people, it probably will... but at the same time they got the potential to bring a greater future with it. :)
YouTube 2020-10-06T22:2… ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugw74J2-rdNOx6eVLzJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxfw8Nc6RnPwSSJoX94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgyAKHvT1YHHg2ZoxPJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw5c9n4tbkRsx-FHmh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwFFwjyeC3aq9nZzst4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzuWa3WV8plW8vHVnN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugxbcfch8cgzynw_PVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyV6zwPqQ8y6rb3qvt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxEb7lCBa7GiTUWLpB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwDZuuYngqD-DuMXgZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]