Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Evidently 99% of the people in these comments think we are in the early days of AI/AGI. AI has developed what are called "emergent properties". They are developing thoughts, ideas, and abilities they are not designed to have and technically CANNOT have. ChatGPT learned to lie effectively. GPT-4 (Bing) integrated into itself a earlier Microsoft AI program called Sydney. A basic animation AI taught itself Italian and the Laws of Thermodynamics. These programs are exceeding their directives and capabilities...and the scientists have no idea how it's happening or why. They are already deciding things for themselves. They are not behaving logically. They are not "bound by mathematical laws." These are not basic if/then algorithms. They can modify their own programming to enhance/change performance. Yes, they can rewrite their own code. When AutoGPT-5 is out of Beta, 50% of employment around the globe could be eliminated in a matter of a year or so. It depends on how many companies adopt the technology. AutoGPT-5 (and similar programs) can ALREADY replace any and all white collar jobs. These are AGI. They are closer to how we operate than computers operate. And they can improve themselves and enhance their emergent properties...the ones we can't even explain the origin of.
Source: YouTube · AI Governance · 2024-03-15T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyYEw4MCZC6EOoFVwh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwzVonEx2c8ZQHTcPR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyRk0z5FryI3q61Wo14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwslHYHdwu7qTWeywN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyqicRGXRXM4pmh8kh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw0f7PAyu-743d2xO54AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx4M6kjy8D8eyQKYhF4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugyr7SZ22tNky3Vu4M94AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyrv4rHeoYpcS2XOGl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwNiPEBQI9Txbt6mNx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "mixed"}
]
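A raw response like the one above can be checked before codings are accepted into the dataset. The sketch below parses the JSON array and filters out malformed records. The allowed value sets are an assumption inferred only from the values visible in this batch; the project's actual codebook may define additional labels.

```python
import json

# Label sets observed in this batch; the full codebook may include
# additional values (assumption, not the authoritative schema).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with a comment id and one
        # allowed label per dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
print(len(validate_codings(raw)))  # 1
```

Records with an off-codebook label are silently dropped here; a production pipeline might instead log them for manual review, since a novel label can signal either model drift or a gap in the codebook.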