Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. has limited use and I would say that it is still far too vulnerable for misuse. To me A.I. is a money making tool and good for simple situations but not very good at complex problems that include human traits. Here is an example: A.I. is being used for over the phone medical advice. In one instance A.I. actually told a cocaine addict, who was in therapy, that he seems to function better if he uses cocaine. The problem is there are differing filter processes that A.I programs use in solving problems. If the filters are on a minimal or medium setting, it will produce answers that are not helpful and actually harmful. Humans and the human brain is far more complex than any computer.....there are so many variations and complex human processes that even computers can't learn them fast enough and will never really have the info they need to supply a proper human answer. Also, Just look at some of the A.I. generated documentaries there are on youtube now. There are always flaws in the information or a lack of detail. They also seem to produce "word salad" description of things that don't sound like what a real human would put together. My opinion is A.I. has limited use and in complex comprehensions it is still far behind human abilities.
youtube AI Governance 2025-07-29T17:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugx1rFvQdEHzUp18cH94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwBJbw-mWQPEAs8gVx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw459c4GBGzexVMNKd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgycFMVZsxw8p-obaUd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGyfaHU8RkNEOcaxp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyB5nN705VZktwlcCx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxQPjx7btMhRLg6eUl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwLi7tEveTYLjvGVUh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugybd0ERnxfAK3iQGjZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwwR1Itbr5ksCNINU14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
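The coding result shown above is the second entry in this raw array (id ytc_UgwBJbw-mWQPEAs8gVx4AaABAg). As a minimal sketch of how the raw response maps back to a single comment's codes (variable names are illustrative; the array is truncated to two entries here), the JSON can be parsed and indexed by comment id:

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (truncated to the first two entries for brevity).
raw = '''[
  {"id": "ytc_Ugx1rFvQdEHzUp18cH94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwBJbw-mWQPEAs8gVx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Index by comment id so any one comment's codes can be looked up directly.
by_id = {entry["id"]: entry for entry in codes}

record = by_id["ytc_UgwBJbw-mWQPEAs8gVx4AaABAg"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# → developer consequentialist regulate fear
```

Looking the entry up by id, rather than by position, matches the coded comment to its dimensions even if the model returns the array in a different order.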