Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I would start by saying thank you for another great episode! Always a pleasure t…" (ytc_UgwbRukw-…)
- "Yep. I mentioned that scenario to my wife not long ago. She agreed that hackers …" (ytr_Ugx1bSjH_…)
- "Senator Bernie Sanders, hello, we agreed on what you're telling us, but most of …" (ytc_UgypTS4sv…)
- "This video made me feel super weird. I've been talking about these issues and re…" (ytc_Ugymwylxc…)
- "Post WWII America had a massive manufacturing boom because we were pretty much t…" (ytr_UgxS3oCc2…)
- "My personal take: Any entity able of self-reflection (which is a more nuanced ta…" (ytc_UgzY9MQI6…)
- "I work as an AI designer for an automation company. One crucial point that often…" (ytc_Ugy2iN1CL…)
- "You know that coding is closer to being replaced by AI then art making is, right…" (ytr_Ugz2C7-81…)
Comment
I've spent some 30 years in this field in some capacity, most of it focusing on human emotional analogs for very specific types of usage conditions.
Research goes back to the mid-1960s with the very first so-called "AI" program, ELIZA. I put that in quotes because it's really not artificial intelligence, or any intelligence whatsoever. But it proves a very critical point about the entire discussion we've had over these countless years, long before the marketized hype.
Technology, when used properly, is a blessing. Technology, when used improperly, is an absolutely hideous curse. That is what we are seeing on both the corporate and "private" levels with this saturation of bots.
I deliberately put private in quotes because, truthfully, it's not private. When you look at the influencer market, it's being paid for to deliberately manipulate, the human emotional analog context used in the worst way possible. Really no different from Character.AI, or Replika, or some other kind of deliberate parasocial manipulation construct.
The proper use of a human emotional analog would be in grief treatment or some other legitimate form of counseling under the careful watch of a licensed clinician. Human emotional analogs are incredibly powerful and incredibly dangerous.
There are many laws that prohibit the training of AI models for human emotional connection, but the sad reality is that the actual companies, like Character.AI or Replika or any similar product in the Google Play store, don't actually do any training at all, and therefore they would be completely exempt.
That's the whole crux of this discussion: lawmakers are passing laws not to actually do something meaningful, but to hide the truth of where they make their money within their own investments. It's not necessarily OpenAI or Claude or Anthropic, it's the user. The technology makes this problem more readily visible, but it's still the person behind the keyboard driving it.
This is very similar to the p
Source: reddit · Thread: Viral AI Reaction · Timestamp: 1777050031.0 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ohyiuto","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"scepticism"},
{"id":"rdc_ohymoz7","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"rdc_ohyy3ih","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_oi04mwa","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_oi1nlmw","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"}
]