Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "When AI books into therapy and during a virtual session under some deep fake; i…" (ytc_UgzTFRADU…)
- "But what if humans generally become more skilled? Hasn't that been a trend for a…" (ytc_UgzsalTjv…)
- "It also spent 30000 context tokens attempting to convince me Santa exists. Lea…" (ytc_UgzUaR2zj…)
- "Turns out it isn't going anywhere, just no more Disney filters, you think regula…" (ytc_Ugxp5AUlM…)
- "Thank you. They don't understand that even if they got what they wanted, they wo…" (ytr_UgyHEMdOG…)
- "AI is not a tool, it's a service. It also doesn't give you your idea back at you…" (ytr_Ugz0Uw4kw…)
- "Don't forget that every software system has an error rate. When they're boasting…" (ytc_Ugz45l-Yr…)
- "What he's said at the beginning is 💯. If we don't lead with AI an advisory will…" (ytc_UgyxnBDIo…)
Comment

> I think it’s cool and has some neat potential uses
> But in its current incarnation it’s entirely unethical (and yes ugly)
> Edit: some potential uses I’ve thought of a sketch artist for law enforcement or *maybe* a tool to create concept art to submit to a real artist. Regardless the only way I’d ever be ok with it is if the training data was gathered consensually and that the ai “art” is never released as the final product

Source: youtube · 2024-11-09T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_Ugyag1er_Yn4aeT_h3x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRMWp2XDSHN2INPgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxS3RA05gIYsZyKvKd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxkm1L6Lzz7sJwunzl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugz3Wcd02B-mj2dp3st4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwpPPJBjgUCFoiRMx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugzi7k3Cr5gxpuxNJH94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwuRQLZiddimjvPyvB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyedYQO648RchOgUSF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIBDNvwi_KVBai-s94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
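A minimal sketch of how a batch response like the one above could be parsed and indexed by comment ID for lookup. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample output; the allowed-value sets are an assumption inferred from the values that appear there, since the full codebook is not shown here.

```python
import json

# Allowed values per dimension -- ASSUMPTION: inferred only from the
# values visible in the sample response; the real codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "disapproval", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw model response and index codes by comment ID,
    dropping any record whose dimension values fall outside ALLOWED."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# One valid record (taken from the response above) and one invalid one
# with a hypothetical out-of-codebook value, to show the filtering.
raw = '''[
 {"id":"ytc_UgzIBDNvwi_KVBai-s94AaABAg","responsibility":"developer",
  "reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_bad","responsibility":"alien","reasoning":"unclear",
  "policy":"none","emotion":"mixed"}
]'''
codes = parse_codes(raw)
print(codes["ytc_UgzIBDNvwi_KVBai-s94AaABAg"]["policy"])  # regulate
print("ytc_bad" in codes)  # False
```

Keying the parsed records by `id` is what makes the "Look up by comment ID" view above cheap: each lookup is a single dict access rather than a scan of the raw response.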