Raw LLM Responses
Inspect the exact model output for any coded comment: look up a response by comment ID, or pick one of the random samples below.
- `ytc_UgwruewhS…`: "To anybody who wants a law to enforce AI transparency: there needs to be a law t…"
- `ytr_UgztUqcJB…`: "@blackstone09 The issue isn't the advancement, The issue is that a whole ton of…"
- `ytc_UgyngWy6j…`: "So the movie people did 3 terminator movies That at the time was considered Sci …"
- `ytc_Ugxr2dzNV…`: "So the solution to an AI dystopia is just to make AIs dumb? I think not. Not onl…"
- `ytr_UgwTTtRQj…`: "@EnbyOccultistwell, technically, that's what the human brain does. However, say…"
- `ytc_UgzaqS3Rk…`: "People get their asses pampered currently, like in wall-e. It's just done with h…"
- `ytc_Ugz3gP3un…`: "What scares me most, as an artist in training who wants to work in the realm of …"
- `ytc_Ugxu4Xh9D…`: "Wait until the community activists get wind of this and scream about rascism and…"
Comment
Same, work in identity security, dumping our code into copilot is a big no no. However, these AI companies have proven a track record of asking for forgiveness instead of permission, so it’s fun pretending like all our code hasn’t already been injected because it’s all in github.
We aren’t touting a lot of useless AI vaporware features, yet. We have one product that has a custom query language. And dumping that into a small LLM has actually proven useful to end users that can just tell it “give me all users who logged in on X” and it will spit out the query and run it.
Source: reddit · AI Jobs · 1716391256.0 · ♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
```json
[
  {"id":"rdc_ky6qyrn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_kyguubz","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"rdc_l4duiio","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l56jw1t","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_l5txiut","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
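The raw response is a JSON array of per-comment codes, which is what makes the by-ID lookup above possible. The following is a minimal sketch of how such a batch response could be parsed and indexed; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown, while the function name and the validation step are illustrative, not part of the actual tool.

```python
import json

# The batch coding response shown above: one JSON object per comment.
raw = """
[
  {"id":"rdc_ky6qyrn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_kyguubz","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"rdc_l4duiio","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l56jw1t","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_l5txiut","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
"""

# Keys observed in the response; rows missing any of them are skipped.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID."""
    rows = json.loads(raw_json)
    return {row["id"]: row for row in rows if EXPECTED_KEYS <= row.keys()}

codes = index_by_id(raw)
print(codes["rdc_l56jw1t"]["emotion"])  # -> outrage
```

Indexing by `id` mirrors the "Look up by comment ID" feature: once the array is turned into a dict, retrieving the coded dimensions for any comment is a constant-time lookup.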