Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Maybe malice was the point, and their whole goal was to martyr themselves to set the precedent on how using AI to prepare a legal argument will be treated. Honestly, one could probably do a halfway decent job of using GPT 4 to speed up legal research, and potentially even have it fact check itself, but it would involve heavy utilization of API calls, the creation of a custom trained model that's basically been put through the LLM equivalent to law school, application of your own vector databases to keep track of everything, and of course, a competent approach to prompting backed by the current and best research papers in the field... not just asking it via the web interface "is this real?"
In short, their approach to using ChatGPT in this case is to prompt engineering what a kindergartener playing house is to home economics. All they really proved here was that they're bad lawyers and even worse computer scientists, but now that this is the first thing that comes to mind when "AI" and "lawyer" are used in the same sentence, what good lawyer would be caught dead hiring an actual computer scientist to do real LLM-augmented paralegal work? What judge would even be willing to hear arguments made in "consultation" with a language model?
I realize this thought doesn't get past Hanlon's Razor, of course. It's far more likely that a bad lawyer who doesn't understand much of anything about neural networks just legitimately, vastly overestimated ChatGPT's capabilities, compared to a good lawyer deciding to voluntarily scuttle their own career in order to protect the jobs of every other law professional in the country for a few more years... but it's an entertaining notion.
Source: youtube · Video: AI Responsibility · Posted: 2023-06-10T20:4… · ♥ 105
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugxnv-cXDekdE96sEeN4AaABAg.9qmuVxy3EeF9qn8olF60uC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxnv-cXDekdE96sEeN4AaABAg.9qmuVxy3EeF9qnIJ_bdkME","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxnv-cXDekdE96sEeN4AaABAg.9qmuVxy3EeF9qnJ5eQk4yq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzJQ_6XMOxAyJLMsK14AaABAg.9qmtJ9de6O49qnM04nY4LA","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzLYsffOmWHZSRITZF4AaABAg.9qmrvR-mUqL9qnB6HhAkUp","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzLYsffOmWHZSRITZF4AaABAg.9qmrvR-mUqL9qnDjXmj-BF","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzxXRh89ra3p-AtHf14AaABAg.9qmpqjETrs69qmq2Wxftiu","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwtS5LL7iLtrc1kfyV4AaABAg.9qmpd3v1iNL9qnDmA7KYWr","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwtS5LL7iLtrc1kfyV4AaABAg.9qmpd3v1iNL9qnImBiWtbY","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugwy5QNuQd6LmTLqQod4AaABAg.9qmogeX1sD19qnFWf0_b6b","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
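The raw response is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of the lookup-by-ID step, assuming the response has already been captured as a string (the short IDs in `raw` are illustrative placeholders, not real comment IDs from the dataset):

```python
import json

# Hypothetical raw model output: a JSON array of per-comment codings,
# with the same fields as the table above (responsibility, reasoning,
# policy, emotion).
raw = """
[
  {"id": "ytr_abc", "responsibility": "user", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_def", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]
"""

def lookup(raw_response, comment_id):
    """Return the coding dict for one comment ID, or None if it is absent."""
    codings = json.loads(raw_response)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup(raw, "ytr_abc")
print(coding["responsibility"])  # -> user
```

Returning `None` for a missing ID (rather than raising) makes it easy to spot comments the model skipped or whose IDs it mangled, which is the main failure mode when batch-coding by ID.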