Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.

Random samples
- ytc_UgxtkpOpV…: "THIS IS CALLED DE-HUMANIZATION!! CAN'T YOU ALL SEE!?. SINCE THE PANDEMIC WERE AL…"
- ytc_Ugydncrk2…: "I don’t think this is completely the AI’s fault, it’s the people who are misusin…"
- ytc_Ugxdn9mTu…: "the clean water fallacy is such a joke. AI companies have so much money, they ca…"
- ytr_UgzObLMZg…: "This! How many vulnerable people will be fooled into making poor decisions at th…"
- ytc_UgwCsTJyF…: "John you are missing a lot of opportunities, If you would have invited famous / …"
- ytc_UgxvNBArO…: "I think the problem we live in this depressed era are those damn phones. This is…"
- ytr_UgxYjAOba…: "That's subjective as hell. The AI art frankly looks breathtaking while the artis…"
- ytc_Ugzm3ltJ1…: "this is already happening. Salesforce just laid off 4000 people saying AI is re…"
Comment
To me it was clear Ezra wasn't comprehending the mechanical nature of A.I, especially LLM's, and couldn't get away from his initial thought of coding safeguards in. Eliezer explained why this was futile more than once, Ezra seemed to think he was evading something when he wasn't, because he lacks understanding of regressive model training.
youtube · AI Governance · 2025-10-17T12:1… · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
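The coding result above assigns one label per dimension. A minimal sketch of validating such a coding, assuming the label sets inferred from the values visible on this page (the project's full codebook may define more):

```python
# Allowed labels per dimension. These sets are inferred only from the
# values that appear on this page and are an assumption, not the
# project's authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "unclear", "none"},
    "emotion": {"mixed", "fear", "outrage", "approval", "indifference",
                "resignation"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems: missing dimensions or unknown labels."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} label: {value!r}")
    return problems

# The coding shown in the table above passes cleanly.
print(validate_coding({"responsibility": "user", "reasoning": "deontological",
                       "policy": "unclear", "emotion": "mixed"}))  # → []
```

A check like this is useful before accepting a batch of model output, since an LLM can emit labels outside the codebook.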
Raw LLM Response
```json
[{"id":"ytc_Ugx4Hnkh8xD9SF74W-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyzzw73Hyi8hPakJp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMLKfL5W0viIGu9xl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzK9OtyDIgvoXHpFuJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0hMXzNfinvXrNCMZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwuRAK86clvTotycaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySIJo2bTtKqmykzwV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4zfh4kEJyfwJQAcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGMvFT7Dhf22kjNg94AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvWlx3cblKTjD4z8h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]
```