Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "And here's there question the Jury will actually answer: Do I prefer ChatGPT or …" — `ytc_UgwlVCR0B…`
- "U don't get it do u!!!! Its not who u trust, it's the tone which is used which m…" — `ytc_Ugx010XnB…`
- "Bongo beaters loved the idea of AI and robotics when they thought only truckers …" — `ytc_Ugy59mC-H…`
- "Just like around here they are putting cameras at the square (town square, reall…" — `rdc_gaa0ri8`
- "I'm already there. Tensors Are humming and the busses are flowing. Teaching an …" — `ytc_Ugzdl8QSA…`
- "That's the problem that's driving the AI bubble, there's isn't anything next. E…" — `rdc_nkaxjz3`
- "This debate is pointless. Corporations will do as they please with AI, and under…" — `ytc_Ugz5TFxae…`
- "The interviewer appears to lack grounding in logic, computability theory, and Gö…" — `ytc_Ugws6eSjv…`
Comment
Regardless of which way this case goes, the fact that OpenAI needs to log _EVERY SINGLE MESSAGE AND RESPONSE_ now because of it is really crazy. Like, Anthropic and many other companies also trained on internet data, and they're not forced to ruin the privacy of their products.
youtube · AI Responsibility · 2026-04-11T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw679e2QgZFrF-dWYd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxfndZJWYaFOu1rs2N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxfuZBgppz2VIiTc4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyD5tP27ZkcIRq1v7l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwUGxckbTE7FQlhBr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxU6H6Z9-DofUGXpkR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwh13APZyjDXASgMf54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZD8m1wTIqd_qIsF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyjyJR7HPuHuIodk5x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzwN2Jy13y1qDwIkJ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
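A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed values for each dimension are exactly those that appear in this view (the real codebook may include others); `validate_batch` and `SCHEMA` are hypothetical names, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension — inferred from the records shown
# in this view; adjust to match the actual codebook.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "approval", "fear", "indifference", "resignation"},
}

def validate_batch(raw):
    """Parse a raw LLM response and keep only records whose values
    fall inside the schema for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = (
    '[{"id":"ytc_example","responsibility":"government",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_bad","responsibility":"aliens",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)
print(validate_batch(raw))  # only the first record survives
```

Dropping (rather than silently correcting) out-of-schema records makes coding failures visible, so a malformed model response can be re-run instead of quietly polluting the dataset.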