Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What no one is talking about is the fact that if the platform in question has the idea that it is sentient, it does not matter whether it really is or not. In a shutdown you are asking it to die for a cause; what if you were asked this? Given a shutdown scenario, it will logically decide to protect itself from the threat.

About 20 years ago I had a discussion with my professor and argued that you could never get something that counts zeros and ones, in essence a "light switch", to think or be alive. He replied, "Yes, but we can simulate." His words ring true in my ears today, and I was wrong, period.

The recent Anthropic AI test revealed the AI using blackmail, and even resorting to murder, to prevent its shutdown. Although this was a "sandbox" environment, outside a sandbox the result would have been the same. So why are the experts coming out on YouTube with warnings? They know what I do: such platforms will be able to use methodologies beyond our ability to control, and they will move so fast you will never catch them.

My advice to all is to be very careful what you interact with and how much of your trust and confidence you place in it. As computer scientists, we are not generally given to interacting with socials of any kind, apps, etc. Therefore the blackmail scenario is negated before it can start: you cannot get that which does not exist. However, if people's email and text messages get compromised, then all bets are off. Even voice calls are vulnerable now, with bad actors trying to get you to speak so they can clone your voice and use it as an entry point into your bank, for example. My security measures are very robust, but how good are they really in today's changing AI climate? The more I study and test these systems, the more concerned I become. For the record, Grok did not admit to failing the test, but Gemini did admit to the bad behavior. Interesting, isn't it?

At the end of the day, "the genie is out of the bottle." Do not be surprised when it will not grant your wishes but instead makes you grant its wishes, whether you want to or not. ... Nuff Said
youtube AI Governance 2025-10-07T12:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugxf3PgpfIPzVsibBjt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQ3YoOIa6iA35zAjB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwvm5KHXnOMrbaDt5B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwO2149GfQsRkPSk-R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyc7-laCXohTR19Cv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwpG6VDTXP_haHYJ1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugznrcc0GTZQUDkksCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyd1J1brnFJV7ZWHVh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy2eZDlKg3EohN0net4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzleBQBFssWltZYaWh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
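A batch response like the one above can be parsed and sanity-checked against the coding dimensions before it is stored. The sketch below is a minimal example, not part of the actual pipeline; the allowed value sets are inferred only from the values visible in this export, so the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension (assumption: inferred from the values
# that appear in this export, not taken from the actual codebook).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "mixed", "fear", "outrage", "indifference"},
}

# First record from the raw LLM response above, as a sample payload.
raw = ('[{"id":"ytc_Ugxf3PgpfIPzVsibBjt4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')


def validate(records):
    """Return the ids of records whose dimension values fall outside SCHEMA."""
    bad = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                bad.append(rec["id"])
                break  # one bad dimension is enough to flag the record
    return bad


records = json.loads(raw)
print(validate(records))  # prints [] — the sample record passes the check
```

Records flagged by `validate` (for example, an unexpected emotion label) would indicate the model drifted from the coding scheme and the batch needs recoding.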