Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Nothing wrong in using LLMs like chatGPT to code if you understand the code and …" (`ytc_UgySzez6P…`)
- "@OyVeey LLM isn't a Markov chain, in Markov chain present state is only conside…" (`ytr_UgzSRTfMI…`)
- "I'm sure everyone will say it's AI, but I'll just pretend it's not and think tha…" (`ytc_UgwFuL6fA…`)
- "I do not want my mind be in hive mind with religious people, I tired of this cra…" (`ytc_UgxR7dFzo…`)
- "I beleive this AI is more dangerous for humans. Ots already killing humans think…" (`ytc_UgxeUpDDo…`)
- "We can't teach them empathy or kindness or feelings. IMHO, AI are psychopath-lik…" (`ytc_Ugx4g-nnd…`)
- "How about just drive the damn thing and pay attention. Lives are at risk and a …" (`ytc_UgxkQ420y…`)
- "The problem is it's not AI. It's a program and THEY programmed it. Now think abo…" (`ytc_Ugx0Nsm8K…`)
Comment
It’s actually much simpler. AI will never take over because you can’t blame it. Imagine a CTO as the only “programmer” with an army of AI. When things go wrong, who can he blame? Only himself.
And that’s not something management can tolerate. The one thing AI can never do is actually be accountable. For anything.
youtube · AI Jobs · 2026-03-10T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzS8Wdn7SpBw_ZdXPl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUTOCjaUAEg0C0bf14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy6druwGj_n-Fd3jyp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwn1Jhni-tPCMvn-A14AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJTLw7078y9s4ALop4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxp4om7BI_IIMrucXp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyzYvpyElRbThptvot4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTRpl4kmC01jD1QMR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx3Kv0ADbNkDsT8FFZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxR1U39RLxGPASKUtl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
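A raw response like the one above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing such a response and looking a code up by comment ID; the allowed dimension values below are inferred from the samples shown here, and the real code frame may include others:

```python
import json

# Allowed values per dimension, inferred from the coded samples above
# (assumption: the actual coding scheme may define more values).
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "approval", "outrage", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, validating each dimension."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_abc","responsibility":"company",'
       '"reasoning":"virtue","policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_abc"]["policy"])  # regulate
```

Validating every dimension on parse means a malformed or hallucinated label fails loudly at ingest time rather than silently skewing downstream counts.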