Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- ytc_UgwhUscC-…: I was replaced with ai recently so seeing this video pop up makes me a little ha…
- ytc_UgwOJtGPX…: ChatGPT has replaced roughly 80% of the stuff I used to Google, just because its…
- ytc_UgwHZCPy1…: EXPENSIVE AND COMPLEX IS BETTER? Waymo's multi-senor hardware approach has been …
- ytc_Ugx6ew3Is…: Worrying about AI destroying jobs will be a short term worry, cause "SkyNet" is …
- ytc_UgzW2LHUn…: Not without AI’s pre requisite Quantum Computing, which sounds like will come al…
- ytc_UgxUQauBt…: Why is he being SO credulous about "ai" when he clearly doesn't understand any o…
- ytc_UgzdxFj-j…: Ever since hearing of how Elon hooked up with Grimes, I've long since taken it …
- ytc_Ugz3e-4PD…: The guy in the chair: You a robot shut up / The robot: WHO THE FUCK YOU THINK YO…
Comment
We are all being played. Altman and Musk are on the same side. This classic bad-guy vs good-guy rhetoric is to keep us distracted while both of them are doing exactly the same thing. Look at the results, not the media narrative, putting one against the other for public consumption. They're all doing exactly the same thing: advancing the AI agenda.
None of these people has our best interests at heart. AI is not here to help humans.
Source: youtube
Posted: 2026-04-14T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy_UfadTbZmC31cCqJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxqy-7OD_FKXEjWYiN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugybo64eC02Z47iWITt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwsLPmNNDcJlKbvJNl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwToSnXQpMOGFtgu1x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzEyHy36f7mMh9093h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVlWJ9e4D3AvQg3dV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxrMqOd0R8sr22mM854AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzL2Ocl2iepR1IupgV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxkXfzK2nx0UFrGKz54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
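The lookup by comment ID described above can be sketched in a few lines: parse the raw model output as a JSON array and index each coded row by its `id` field. This is a minimal illustration, assuming the response is valid JSON with the four coding dimensions shown (`responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_by_id` is hypothetical, not part of the tool.

```python
import json

# A small excerpt of a raw LLM response in the format shown above:
# a JSON array of coded comments, one object per comment ID.
raw_response = """
[
  {"id": "ytc_UgxrMqOd0R8sr22mM854AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzEyHy36f7mMh9093h4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the raw model output and index each coded comment by its ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_id(raw_response)
print(codes["ytc_UgxrMqOd0R8sr22mM854AaABAg"]["emotion"])  # outrage
```

With such an index, inspecting the exact codes assigned to any sampled comment is a single dictionary lookup on its `ytc_…` ID.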