Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, because as Eliezer Yudkowsky and Nate Soares wrote in their book "If anyone builds it, everyone dies", "you don't get what you trained for". The goal was for the LLM not to cause harm, but sometimes inaction causes harm and potentially even more harm than action. Someone commented that when the responsibility lies on Alex it's fine for him to pull the lever, but when phrased in a way that makes directly ChatGPTs responsibility, the safety guidelines kick-in. Another example of how reckless and dangerous it is releasing ChatGPT to the public without know it works and why that is, first. Side note, when you say "most of humanity", in reality that's actually the data available for it to train upon. My point is that there's going to be huge biases in the data and the fine-tuning.
youtube 2025-10-23T13:0… ♥ 6
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgzwyJTG7HTUBlbYK-t4AaABAg.AOSRNIaABzVAQSfaZCWfsY","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxrdUgl4-A1pHb0QW54AaABAg.AOJkTKqNqzUAOcHVzXiSRy","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgzZNtd8FA9ycsdX4Hl4AaABAg.ANu1r1r_pPJAOCoRUkZ6kg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgykxUQfBhoyksi3Fc94AaABAg.ANsrxPkeB2_ANt8k165Cyt","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytr_UgzC1vkMqNkt6GByxmp4AaABAg.ANrzQ7BkxMaAOBTZHYOqqK","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytr_Ugwh6frJ4pBHFyr-JXN4AaABAg.ANr00JB5wX0AOScJgiZ0UO","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgwclWSvkN9OnEh3msp4AaABAg.APpWfyQ1NVBATn4jI1vO-q","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}, {"id":"ytr_UgyBmsLm3FexIMeprKF4AaABAg.APCcxhnkFxEARKf9baCaHA","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytr_Ugxux27tOGuoOGnscad4AaABAg.AMjRucnSv8uASbauppUFJL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgysclNkGpVPDPHq8314AaABAg.9umfzBBMsf_AGR9NbKwVD8","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]