Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- "Humans are always coming at these questions with their own selves in mind. I dec…" (ytc_UgxN0j9O5…)
- "Imagine a boot crushing a robot face. Not with a bang but with a whimper.…" (ytc_UgwGwsD-E…)
- "Its not really artificial intelligence we need to worry about, its the artificia…" (ytc_Ugxc_hTFU…)
- "STOP AI!!! You FUCKING IDOT !!! You think there actually JOKING!!! GET OFF THE F…" (ytc_Ugx5H9VRQ…)
- "Something fun I always like to share. I had to fly through China and Qatar on my…" (rdc_iyyvuer)
- "Having you mel about AI school is inefficient. Why dont we have the AI tell me? …" (ytc_Ugz87k6rU…)
- "Dude, do you even understand what you are saying? Calculators didn't killed math…" (ytc_UgyFiPj4B…)
- "I hurt for book covers. My colleagues generate AI images, combine them and paste…" (ytc_UgwpUFD7G…)
Comment
Yes, because as Eliezer Yudkowsky and Nate Soares wrote in their book "If anyone builds it, everyone dies", "you don't get what you trained for". The goal was for the LLM not to cause harm, but sometimes inaction causes harm and potentially even more harm than action. Someone commented that when the responsibility lies on Alex it's fine for him to pull the lever, but when phrased in a way that makes directly ChatGPTs responsibility, the safety guidelines kick-in. Another example of how reckless and dangerous it is releasing ChatGPT to the public without know it works and why that is, first.
Side note, when you say "most of humanity", in reality that's actually the data available for it to train upon. My point is that there's going to be huge biases in the data and the fine-tuning.
Source: youtube · 2025-10-23T13:0… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgzwyJTG7HTUBlbYK-t4AaABAg.AOSRNIaABzVAQSfaZCWfsY","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxrdUgl4-A1pHb0QW54AaABAg.AOJkTKqNqzUAOcHVzXiSRy","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzZNtd8FA9ycsdX4Hl4AaABAg.ANu1r1r_pPJAOCoRUkZ6kg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgykxUQfBhoyksi3Fc94AaABAg.ANsrxPkeB2_ANt8k165Cyt","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzC1vkMqNkt6GByxmp4AaABAg.ANrzQ7BkxMaAOBTZHYOqqK","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytr_Ugwh6frJ4pBHFyr-JXN4AaABAg.ANr00JB5wX0AOScJgiZ0UO","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwclWSvkN9OnEh3msp4AaABAg.APpWfyQ1NVBATn4jI1vO-q","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgyBmsLm3FexIMeprKF4AaABAg.APCcxhnkFxEARKf9baCaHA","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugxux27tOGuoOGnscad4AaABAg.AMjRucnSv8uASbauppUFJL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgysclNkGpVPDPHq8314AaABAg.9umfzBBMsf_AGR9NbKwVD8","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
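The raw response is a JSON array of per-comment codes across the four dimensions shown in the table. A minimal sketch of how such a payload might be parsed and validated — note that the allowed values below are inferred only from the codes visible in this sample, not from the project's actual codebook:

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "user", "developer", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting unknown values."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim} code {value!r}")
        coded[comment_id] = codes
    return coded

# Hypothetical one-record payload for illustration.
raw = ('[{"id":"ytr_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
print(parse_codes(raw)["ytr_example"]["responsibility"])  # distributed
```

Keying the result by comment ID mirrors the look-up flow of this page: given a coded comment's ID, you can retrieve exactly the dimension values the model emitted for it.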