Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What everyone needs to understand is, AI DOESN'T NEED to be in the same overall level of a human to be able to do harm. They're already better than us in many things and very good at SIMULATING others. All we need to do is to create one that simulates self-preservation and self-awareness and self-perpetuating and is connected in any means to the internet. Done, judgementday.exe in a nutshell.
youtube 2024-03-13T11:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwAIQWdhka0W2tkgWV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxO-Ml3WkgZN35lxOp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwP0CEsw8TkNyJfbyB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyebo3dIVaybKCzPjh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxhSg_hj9EtcvpBpcR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyOMK9pS_7hX0F2x5F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy-EzZvXtLEO5bYzCZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxZZymxW2GukYG1sOV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyDt4diDrqXrwfXdw14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz_w1paPRmMaqq-1wp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
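A response like the one above is machine-readable JSON, so each record can be parsed and checked against the coding scheme before it is stored. The sketch below is a minimal, hypothetical validator: the allowed category sets are inferred only from the values visible on this page, and the full codebook may define more.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values observed in this page's output, not from the actual codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference",
                "approval", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Example using the first record from the response above.
raw = (
    '[{"id":"ytc_UgwAIQWdhka0W2tkgWV4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)
records = parse_coding_response(raw)
```

Validating at parse time means a model drifting outside the category set fails loudly instead of silently contaminating the coded dataset.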