Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a reason why the people who developed these Large Language Models like Chat GPT keep saying that it's not safe to "release it" into the world, and - surprise, surprise - the people that OWN these things tell them to "kick rocks". Even "simple" AI's like the one on Googlemaps accesses data it's not supposed to have access to, and then will provide misdirection if it is challenged. I've had Googlemaps tell me it cannot give me directions when I told it to use my location as the starting point - since I had my "location" setting "off". However, it still displayed the correct travel time, despite not showing me directions (I inserted my actual location and asked again, and the travel time was the same, though now it'd give directions, as well. How can it tell me the travel time if it didn't know where I was?). It also seems obvious - from the way it has difficulty generating videos with animals and people moving, and displaying text in an image or video - that these current LLM AI's can't tell the difference between digital data and "real world" data, i.e. it doesn't understand "real world" physics.
youtube AI Moral Status 2024-12-07T19:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxHlf-BWqPVuvzJ30p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwlLKCDyVa3dbdEflF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyYem-RfRSR9HHA0ZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxOTfPN6US6ZBXpOIh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwHqjPKSleiEGRPUw14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwaYWRVzA90rAy-Vv54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxiXp5HekJsTIHx7Ul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxu54tuwlD8QlWwElF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxZJkOPwXIn2TXaZjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxatwoD1pbv2FJdjbp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
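The raw response is a JSON array with one object per comment, keyed by comment id, with one field per coding dimension. A minimal Python sketch shows how such a response can be parsed back into the per-comment dimension table shown above (the helper name `coding_for` is illustrative, not part of any actual pipeline; the array below is a one-entry excerpt of the real response):

```python
import json

# Excerpt of a raw coder-model response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_UgxOTfPN6US6ZBXpOIh4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# The four coding dimensions expected in every entry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            # Keep only the known dimensions, dropping the id itself.
            return {dim: entry[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)
```

Looking up the id from this page reproduces the Coding Result table: responsibility = company, reasoning = consequentialist, policy = regulate, emotion = fear.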