Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (look up any comment by its ID):

- `ytc_Ugx5t5FgQ…` — 3:21 I don't want to talk about Terminator stuff, but a terabyte of 10100.....11…
- `ytr_UgyP9mJRP…` — Just to be clear, neural learning and generative algorithms aren't real AI. AI w…
- `ytc_UgzuWgByM…` — The way he snatched the face off… he def played with the silicone b00bs… and why…
- `ytc_UgzeRlbKr…` — Oh so the guy poisoned himself with his own stupidity and ignored the ai's clear…
- `ytc_UgxSU1LTt…` — A.I. could definitely replace CNN. Just program every propaganda position, an a…
- `ytc_UgwMI_9j6…` — I am retired and the last thing I want to do is 'work'. So getting on Chatgpt a…
- `ytc_Ugwr-pq3z…` — What!? I’m getting my social work master degree just to avoid unemployment due t…
- `ytc_UgykRyJa_…` — You'd be better off to setup your own code-complete and pull from your own boile…
Comment
I like to distinguish "tools" from "products".
Big closed models totally win the race in productivity tooling. If your business is selling gadgets, you want to sell more gadgets, and you will give your team a premium subscription to any service that helps them sell more gadgets.
Open source wins if you are selling an AI-powered service: not a small chatbot, but volume of inference. Data extraction and validation come to mind, but anything that implies "volume" may benefit from open-source models on on-prem machines.
Say you want a bot to moderate a large community: that can get very expensive through the big providers' APIs, and could be done cheaply (and at a predictable price) with an open-source model.
Requirements and results will differ for every business case, but when the monthly bill is on par with the one-time cost of buying the hardware, it makes sense to hire someone to set up and maintain on-prem machines.
Source: reddit · Thread: Viral AI Reaction · Posted: 1777038571.0 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_lp1xisi","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_oi0i2gq","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_oi0g37f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi0hrnu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi298ri","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
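A raw response like the one above can be parsed and sanity-checked before the codes are stored. Below is a minimal sketch in Python; the `ALLOWED` value sets are inferred from the codes visible on this page, not from an authoritative codebook, and `parse_coding_response` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from codes seen in the raw
# responses above — NOT an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "industry_self"},
    "emotion": {"approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows missing the comment ID
        # keep the row only if every dimension has an allowed value
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '''[
  {"id":"rdc_lp1xisi","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_oi0i2gq","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]'''
print([r["id"] for r in parse_coding_response(raw)])
# → ['rdc_lp1xisi', 'rdc_oi0i2gq']
```

Dropping malformed rows (rather than failing the whole batch) keeps one bad LLM output from blocking the other codes in the same response.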