Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a response by comment ID, or pick one of the random samples below to inspect.
- "Local LLMs are in a weird place. Because you can run a smallish one on an ordina…" (ytc_UgyOO9W0K…)
- "Humanity needs to start being human. Sounds crazy but lets just use one AI. That…" (ytc_UgzbvnerF…)
- "It can be good, but the user needs to practice intent and discernment. There is …" (ytc_UgyPxr8XZ…)
- "I tend to think no technology is invented, but discovered, suggesting that all t…" (ytc_UgwOGfUx9…)
- "When do we demand lower prices from all this AI is so much more efficient? How …" (ytc_UgzuHHpZe…)
- "Never going to use Ai but if I did, it will never know I have 36 kids in my base…" (ytc_UgzCgcJdB…)
- "Ask any doctor or nurse if AI language models produce any value when a PATIENT c…" (ytc_UgzTypZ7x…)
- "Guys all AI is programmed and given parameters by, * er-erm * HUMAN BEINGS. They…" (ytc_UgwPUJjuW…)
Comment
As someone who works in MRI, AI is actually not doing as many favors as it should 🥲 it reads a ton of false positives which can lead to unnecessary treatment and also misses over a bunch. It also can cause data leaks which can violate HIPAA, and worst of all, it creates lazier techs who rely on the technology to do all the hard work which means you’re not getting the best medical care
Source: youtube, "Viral AI Reaction", 2024-12-31T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugys1JAAjKrFDHkGu354AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwo4Gvedbtg3AM9tiZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4bJRnf03cq3u4vQF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzoaXxtPWuu1eI0Ssl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxsRqLC_iCgJsiQfh14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRcx7FORPWKSBQiDR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxpJpKg4wL1y29xZ-J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxfbRlPz-vQmSuS8rJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyqaA5d1wHhzF1UI0t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxK1--DcJqTHz6BwD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
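Each batch response like the one above is a JSON array in which every element carries a comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a payload could be parsed and looked up by comment ID, as the inspector does; the function and variable names here are illustrative, not the tool's actual API:

```python
import json

# Two rows copied from the raw LLM response above; a real payload
# would contain the full batch.
raw_response = """
[
  {"id": "ytc_UgyqaA5d1wHhzF1UI0t4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz4bJRnf03cq3u4vQF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a single comment ID."""
    return codes_by_id[comment_id]

print(lookup("ytc_UgyqaA5d1wHhzF1UI0t4AaABAg")["emotion"])  # fear
```

Indexing once up front keeps repeated ID lookups cheap, and `json.loads` will raise on a malformed model response, which is a useful early failure signal before any codes are stored.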