Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's difficult to refrain from being concerned about watching Geoffrey Hinton(THE GODFATHER OF A.I.) sitting across the table with/from a "successful businessman(entrepreneur), Steven Bartlett having a"cordial"conversation regarding what some of the potential high probability dangers of artificial intelligence are. I admire Steven Bartlett tremendously, specially for having overcome the K12 mountain of odds against his chances of being successful in life, and similarly for Geoffrey Hinton. However when I heard that phishing scams with the aid of A.I., rather countless variables of scams in general are lucrative, and a plausible answer to the majority of the work forces jobs being displaced by A.I. I'm by no means an extremist but when I heard those threats, my ears tweaked and when I heard that A.I. could. 1). course enough financial ruin, as to close down a bank, that's scary, an entire financial institution demolished! and 2). just with a couple million dollars, essentially create a bio weapon which could spell out the end. Is it just me whom feels that listening to this , well it just doesn't sit right, and I will explain why. Surely Geoffrey Hinton along with the entire world's specialists, engineers, scientists and professors having extraordinary understanding and whom are busy developing these general and super intelligence's should be having this conversation Infront of a world summit sort of panel environment, apposed to you, me and the other chap online just now.
youtube AI Governance 2025-06-18T11:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxxO7lnDrNr1B3I3lt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKFmQQ3MhOcC5knCx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzgtHcQkoIgxe5gn_Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxuYN6nt8x_QIfCH354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx6hfu8uZ804zv8BIR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw6ZgX9MzvGzxZowFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0isCCMay1rN9iNGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw9EgKrePe9c39_9ap4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7ezXI7SalLW8IsBh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgwPLEOFH7vnuV6BRrp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"})
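Note that the raw response opens its array with `[` but closes it with `)` rather than `]`, so a strict `json.loads` would raise, which may be why every dimension in the coding result fell back to "unclear". A minimal sketch of a more forgiving parser is below; the function names, the one-character repair heuristic, and the "unclear" fallback are assumptions for illustration, not the pipeline's actual implementation:

```python
import json

# Fallback codes when the response cannot be parsed or the id is absent
# (assumption: "unclear" is the pipeline's default, as in the table above).
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}


def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response into a list of per-comment codes.

    Tries strict JSON first; on failure, attempts one common repair
    (assumption): an array mistakenly closed with ')' instead of ']'.
    Returns an empty list when the payload is unrecoverable.
    """
    raw = raw.strip()
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        if raw.startswith("[") and raw.endswith(")"):
            try:
                return json.loads(raw[:-1] + "]")
            except json.JSONDecodeError:
                pass
        return []


def code_for(raw: str, comment_id: str) -> dict:
    """Look up the codes for one comment id, defaulting every
    dimension to 'unclear' when parsing or lookup fails."""
    for entry in parse_coding_response(raw):
        if entry.get("id") == comment_id:
            return {k: entry[k] for k in UNCLEAR if k in entry}
    return dict(UNCLEAR)
```

Under this sketch, a response ending in `)` would still be coded after the repair, while truly malformed output degrades to all-"unclear" instead of crashing the run.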