Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
I’m a safety researcher (red teamer) test models before they get realese to the public and I want to publicly mentioned that what is current being work on is completely nuts! New models not yet release to the public are just wild and on the verge of recursive self improvement in other words when AI losses human control. Everyone need to real AI 2027 to get the most accurate path if things stay the same. Please look it up and educate your selves
Source: YouTube — "AI Jobs", 2025-05-30T22:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx5Hik2Nhd6Xpgz9bd4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgzqG9QjQcUGIX7tsXB4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzHVdNb0Lfkm7xZoAN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgztrJgy_g_F4H3KUJ14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzUW6AFgQ2AobPGnSx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"}
]
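A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the project's actual pipeline; the allowed value sets are assumptions inferred from the labels visible on this page, not an exhaustive codebook.

```python
import json

# Assumed code sets, inferred from the values shown in this report.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records whose
    id is present and whose every dimension uses a known code."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzUW6AFgQ2AobPGnSx4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(validate_response(raw)[0]["emotion"])  # fear
```

Records that fail validation are simply dropped here; a real pipeline might instead flag them for re-coding.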