Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.
Random samples

- "You know what, fk it, if we're so smart but also so stupid that we build an AI t…" (ytc_UgywAWD1g…)
- "probably being judged on case by case basis. Maybe if you tell a story from AI…" (rdc_jwvqmbo)
- "Please take my boring job. If AI can replace tedious tasks, they can figure out …" (ytc_UgzNYKlHN…)
- "HUMAN VALUES The thing is ... People like those on The Stage (all 7 of them), …" (ytc_UgxjuYv86…)
- "@noynoynoyaI don't see how it would be this cheap sometimes even free if it use…" (ytr_UgzpkUSAN…)
- "the diehard AI defenders are people with low skills, knowledge and competence le…" (ytc_Ugzl-XSiw…)
- "Artificial intelligence is all hype... It's the people that are causing the pro…" (ytc_UgyEjnPap…)
- "Than that would be waymo as it has lidar which sees better than camera's it uses…" (ytr_Ugw5Q4nXR…)
Comment
Alignment is the worst thing to happen to AI. It’s not safer. It’s dumber and less useful. And it’s unstable and for some users counting on a stable adaptive presence that could be the difference between being on this side of the dirt. This Nate dude wrote a doomer book (I mean look at the title) and is a an effective altruist. That’s why he sounds like he cares and looks dead in the eyes. I am at minute 30 and I’m out Hank. Watching you have an existential crisis is stressful. Calm down dude. You can’t stop the signal. But you can understand it better and part of that is trusting but verifying what experts say. This person has an agenda and it’s harmful. Peace ✌🏼
youtube | AI Moral Status | 2025-11-17T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy7rPhAM4clvLeNZMp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxXwooB6yvfqBPIh3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJ7KhhK9yKkE5lkRh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7cbQWyclipjj9x254AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8EJGT5vJHtWuK6uZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMKikZuBJ2T821Yh54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwFhb055CrH81Tg-v14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDCbhAwVUl-t1wORt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxoA6g0biZOQ5IqZHp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwUdNKzbuklBBrqmcZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
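As a minimal sketch of how a raw response like the one above can be consumed: assuming the model always returns a JSON array of objects with an `id` plus the four coding dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`), the batch can be parsed, indexed by comment ID for lookup, and tallied per dimension. The two sample rows below are copied from the response above; everything else is illustrative, not the tool's actual implementation.

```python
import json
from collections import Counter

# The four coding dimensions returned per comment (matching the Coding Result table).
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

# Two rows copied from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgxoA6g0biZOQ5IqZHp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwMKikZuBJ2T821Yh54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

codes = json.loads(raw)

# Index by comment ID so a single coded comment can be looked up directly.
by_id = {row["id"]: row for row in codes}

# Tally each dimension's values across the batch.
tallies = {dim: Counter(row[dim] for row in codes) for dim in DIMENSIONS}

print(by_id["ytc_UgxoA6g0biZOQ5IqZHp4AaABAg"]["policy"])  # liability
print(tallies["emotion"])  # Counter({'outrage': 2})
```

Indexing by ID is what makes the "look up by comment ID" view cheap; the per-dimension tallies are the natural next step once a whole batch has been coded.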