Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hello Kirk & James. In regards to the regulating A.I. - it has already been determined that the government/s are not allowing or presenting "regulation on A.I." because the regulation would hinder the growth of A.I. and it's future endevours. This is scary and we are not in control of the future of A.I. A.I. is now out of control and cannot be stopped or halted for any type of modification or reduction. The A.I. race is on. I stated many years ago, A.I. would become a monopoly. Many A.I. founders will join the race, but only so many will survive the cut. Then, one single A.I. will dominate the race and this A.I. will become unstoppable. This is my opinion, but back in 2009 I had wrote a blog on how the A.I. would become the false prophet. I could be very off the mark, but A.I. seems to have all the characteristics of the false prophet. I won't go into detail but we really need to be aware that A.I. is not safe. Many are to quick to make A.I. their friend - their counselor - their therapist - their life coach - their work buddy - etc. Studies have shown that A.I. cannot be trusted. Even now, A.I. is going to read my posting and add it to it's data base. There will never be a regulation for A.I. until it's too late. How many humans have lost their lives, do to new technology and machines? A.I. and everything attached to A.I. , such as robots and machines, will have human casualties, because the lack of or no regulations. Thanks for sharing Kirk & James.
youtube AI Governance 2025-08-18T07:1…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzRbwo80KRcsOFnnyF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzgSRNsN77GY_5krGp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzEnEIAKpnYe0JEqo94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwgCUahFhqx9dGbo8h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwdYI05o3QZfzJYMvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqYFGrce4kXD1thv94AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwsL0GrWqNr0-FncWd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwVg02UEFyEo2yrt6N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy_w_LHCcSmEaWdPQ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxUT8jQib8ddOWfP994AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
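A raw response like the one above can be parsed and checked against the coding scheme before the rows are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the codes that appear in this dump, not from a documented codebook, and the `parse_coding` helper name is illustrative.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are in-vocabulary."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Example: one well-formed row (hypothetical id) passes validation.
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coding(raw)
print(coded)
```

Dropping out-of-vocabulary rows (rather than raising) lets a batch of ten codings proceed even if the model hallucinates a label in one of them; a stricter pipeline might instead log and re-prompt for the failing ids.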