Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "90%+ of Enterprise Ai projects have failed (according to MIT) - don't believe t…" (ytc_UgxEyO3YE…)
- "The SMR ai got turned on in 2019 when someone said switch from dating advice to …" (ytc_UgwNLktpz…)
- "Well it works for scientific papers and other fields too. A single source is pla…" (ytc_UgzA--ZfJ…)
- "One of the deeper implications in Gibson's Neuromancer is that we will need AI t…" (ytc_UgzkAyj_1…)
- "It will be cost prohibited for a long time berfore any AI will be replacing huma…" (ytc_Ugh9AwpPs…)
- "Only AI will find anything wrong with this. So unless you're planning to feed he…" (ytr_Ugy2Qs9DN…)
- "one can argue that they use AI to cut cost but like just use official art dawg t…" (ytc_Ugw_nioyh…)
- "If the truck is not a full self driving vehicle and your job is to take over whe…" (ytr_UgxxxzyXt…)
Comment
The only "benefit" of the Iran quagmire is that it might just accelerate the popping of the AI bubble due to the destruction of supply chains and the resulting recession. The crazy "investment" into AI and it's datacenters is just the broligarchs looting the world economy in hopes that they will get technology and robots to defend their bunkers in the dystopia that they themselves are creating. Peter Theil, Alex Karp, Sam Altman and Elon don't give a rats ass about the common folk or the suffering that they will cause. In fact they have likely planed for a world with far fewer humans.
youtube · AI Governance · 2026-04-23T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzN1RVez7mDDMSfw9J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz1cBQDdpmkdmnGRGh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxY8pM83m2ScZt04OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6wxwdzr5anc-b-gx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwB3MiJWMxSm0ZsKNR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzJyQRmvx_WqLPeSu94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyZFoSpPSlZWbk0GO14AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2KuCwVrmZ-k6yvVZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugys59ifvGa9hYV0Uwl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyE9MklwP7rvByUu7h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
```
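For downstream analysis, a raw response like the one above can be parsed into a lookup table keyed by comment ID and sanity-checked against the coding scheme. A minimal sketch: the allowed category values below are inferred only from the records visible on this page, not from a documented codebook, so the real scheme may contain more categories.

```python
import json

# Category values observed in the records above; the actual
# codebook may define additional ones (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "resignation"},
}

def validate_codings(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coding records)
    and return {comment_id: coding}, raising ValueError on any
    value outside the observed scheme."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec[dim]!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With the response text above, `validate_codings(raw)["ytc_UgzN1RVez7mDDMSfw9J4AaABAg"]` would return the record that the Coding Result table renders (responsibility: company, reasoning: consequentialist, policy: none, emotion: outrage).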