Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgzVOTiRO…: "@cristitanase6130 As far as I know, he only started posting videos since last y…"
- ytc_UgxN8vbqG…: "Start being friendly to AI, i'm sure they will remember any disgression you may …"
- ytc_Ugzg1XMN8…: "design, color theory, composition, drawing dexterity and mark making are all ski…"
- rdc_o76hhko: "I think we can all see the utility of AI AND recognize that it is objectively ba…"
- ytc_Ugx1-5_Zz…: "Ngl ai art isnt bad but its bad when you say that you are an artist and you made…"
- ytc_UgxfHvxqa…: "I’m not too worried about AI art FULLY replacing artists because there are alway…"
- rdc_mfgf5km: "When talking to the model it doesnt feel like theyre a robot that knows everythi…"
- ytc_Ugz4Ai-_9…: "Dagogo needs to protect his voice as IP. It's inevitable that some AI chat bot …"
Comment
>both responses make sense given the stakes.
Honestly I do not think both responses make sense. While I understand the appeal of the automation, I think it needs to be both practically and visibly sequestered. AI can be *extremely* useful as a tool in programming, but its error rate is significant, and likely always will be. Even with really minor tasks it sometimes just loses the plot, and with complex ones it can end up going in insane directions when given too much leeway, breaking everything in its way.
So making sure people at least sign off on anything it does gives you a place to put the blame. If someone breaks something, it was not an AI doing its normal AI thing that caused the problem; it is the person who is letting the LLM *do their job for them.* Without some kind of accountability guardrail it is just going to keep inserting itself into everything, building more and more technical debt until nothing is maintainable.
Ideally these guardrails would be foundational to how these systems are deployed, but it has already been demonstrated that AI companies have zero ethical boundaries, and the companies that push them just want the hype. So until we have governments that feel like protecting their citizens, smaller organizations need to implement it on a policy level.
Source: reddit · Post: Viral AI Reaction · Timestamp: 1776613640.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_oh3ee80","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_oh3j1nf","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"rdc_ohf5qu9","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_oi2py6c","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_oh3e3td","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
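A batch response like the one above can be parsed into per-comment coding records. The sketch below is a minimal example, not the tool's actual implementation: the allowed value sets are assumptions inferred from the sample output and the Coding Result table, not an exhaustive codebook, and `parse_batch` is a hypothetical helper name.

```python
import json

# Assumed value sets, inferred from the sample batch above — not a full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coded_dimensions},
    rejecting any record with an out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"rdc_oh3ee80","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["rdc_oh3ee80"]["policy"])  # regulate
```

Validating against a fixed vocabulary at parse time is what makes "look up by comment ID" safe downstream: a malformed or hallucinated label fails loudly here instead of silently entering the coded dataset.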