Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's all well and good having the developer defend intentional overloading and breaking of AI (having an AI do something immoral) to reinforce the system after a test designed to break it. The problem is that there are plenty of immoral programmers and businesses all over the world that won't do all that testing. And let's not forget the huge issue with computers: they can be hacked and programs altered with just a few keystrokes. Hospitals, banks, supermarkets, just to name a few, have all suffered hacking and data breaches. AI, like any computer system, isn't foolproof; it can go wrong, and it will go wrong at some point.
Source: youtube | Video: AI Jobs | Posted: 2025-05-31T04:5… | ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
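
The table pairs each coding dimension with the single value assigned to this comment. As a rough sketch of how one such record might be represented, assuming the value sets visible in the raw response below, here is a hypothetical Python dataclass (the name CodingResult is illustrative, not part of the tool):

    from dataclasses import dataclass

    # Hypothetical container for one coding result; fields mirror the
    # dimension labels in the table above, lower-cased as in the raw JSON.
    @dataclass
    class CodingResult:
        responsibility: str  # e.g. "developer", "company", "ai_itself", "none"
        reasoning: str       # e.g. "deontological", "consequentialist", "virtue"
        policy: str          # e.g. "regulate", "unclear", "none"
        emotion: str         # e.g. "outrage", "approval", "resignation"

    # The record shown in the table above:
    result = CodingResult(
        responsibility="developer",
        reasoning="deontological",
        policy="regulate",
        emotion="outrage",
    )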
Raw LLM Response
[ {"id":"ytc_UgyMRdCAdfhtW0ksAYd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwVmWkr2qz7iGdhjul4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzzTGkLsbrGG4izskJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw6ZMLQ6-KZTdwD8Pd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgylOSHoeDdY2LeM4Sx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]