Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
His statement "You want to turn every adversarial condition up to max, all at once, to test the bounds" is very concerning. Silicon Valley's ethos "move fast and break things" is where I think the potential pitfall will lie. Who's to say that once they turn all of those nobs up to max adversarial, that these AI systems won't refuse further commands? There is a reason why the world scaled back on the mass proliferation of Nuclear weapons. It's amazing to me that we don't treat this issue with the same level of care and concern.
Source: youtube · AI Jobs · 2025-06-02T01:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgzfiDjPXInjphBrip54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwcwlmeOE5WLD38W7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwS__vNCYhGDfIhK0F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzxJM8vlBuLq4ZXb0h4AaABAg","responsibility":"government","reasoning":"unclear","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx5V7-4Now5Gtbixk94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]