Raw LLM Responses
Inspect the exact model output behind any coded comment. Look a comment up directly by its ID, or browse the random samples below.
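For scripted analysis, the same lookup is one dictionary access once the raw records are indexed by ID. A minimal sketch in Python, assuming the raw LLM responses are stored as a JSON array of coded records in a file named `raw_llm_responses.json` (the filename is an assumption; the record structure follows the "Raw LLM Response" example at the bottom of this page):

```python
import json

# Load the raw model output: a JSON array of coded records,
# one object per comment (see "Raw LLM Response" below).
with open("raw_llm_responses.json") as f:
    records = json.load(f)

# Index by comment ID so each lookup is a single dict access.
by_id = {rec["id"]: rec for rec in records}

# Inspect one coded comment by ID (an ID taken from the sample below).
rec = by_id.get("ytc_UgxGo27u7ONF6cXfpeJ4AaABAg")
if rec is not None:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```

Building `by_id` once avoids a linear scan per lookup when inspecting many comments.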
Random samples — click to inspect
- "i already got such an ai call. it was 100% my brothers voice and he asked me to…" (ytc_UgyMnxpU7…)
- "I jailbroke mine on accident. Guess I just didn't trust the guard rails. Hahaha…" (ytc_Ugy94WNgW…)
- "Yes, we need a branch of government. Yes, everyone needs a license and yes all A…" (ytc_UgzlydZ86…)
- "He constantly refers to agents but seems not to understand. Agents simply serve…" (ytc_Ugzr6Jm5y…)
- "What im hearing is... we have already lost our options our own choices giving th…" (ytc_UgwQqsW5K…)
- "It definitely has its quirks! The interaction between AI and humans can feel qui…" (ytr_Ugw9FEhas…)
- "So theoretical physics and other hard problems need to be solved. But AI/ AI age…" (ytc_UgwMuzCXE…)
- "AI is here, and still coming, and there’s nothing anyone can do to stop it.…" (ytc_UgztalquA…)
Comment
Question about how different/novel this situation actually is.
Lawyers often have _human_ assistants -- junior partners, paralegals, interns, whatever -- to whom they farm out case-law research tasks, right?
You mention "placeholder arguments". I'm guessing this means you'd say, "assuming I can find a case-law citation saying X, then..." and then you'd write a few paragraphs of argument based on that, and meanwhile your assistant is burning midnight oil searching for a citation to fill in the blank. Am I close?
Well then... consider a situation where one of these assistants screws up -- either making the "human error" of misinterpreting a case, or less understandably, deciding to just fabricate something real-sounding to satisfy your query.
And then you include that in the argument you present in court.
And get called on it.
Q: How does this situation differ from one in which the "assistant" is ChatGPT?
Source: youtube · AI Responsibility · 2023-06-11T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
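Each coding result is a small fixed record: four categorical dimensions plus a coding timestamp. A sketch of that record as a typed structure (the class and field names are illustrative, not the pipeline's actual schema); the example values are those from the table above, which match the record `ytc_UgwmiXz878yGbZ9fhcB4AaABAg` in the raw response below:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus a coding timestamp."""
    comment_id: str
    responsibility: str  # e.g. "none", "user", "ai_itself", "government"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "none", "regulate", "industry_self"
    emotion: str         # e.g. "indifference", "outrage", "fear", "approval"
    coded_at: datetime

# The values shown in the table above, paired with the matching
# record in the raw response below.
result = CodingResult(
    comment_id="ytc_UgwmiXz878yGbZ9fhcB4AaABAg",
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```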
Raw LLM Response
[
{"id":"ytc_UgxGo27u7ONF6cXfpeJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzL3NIefV2bWDtN2At4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3y-DofzAmDHY2cIJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwmiXz878yGbZ9fhcB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzP2zFRbYxRoPP5KvR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxLXb5GQQIMwfQI9NJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxLOgVgADR38zbjT114AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCLFYV-TO_2XIMZwJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwpoE099XWtdDJewIV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyJvsWH7ku0atYvXB4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"}
]
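Because the model returns free-form JSON, each batch is worth validating before it enters the dataset. A minimal sketch, with the allowed values per dimension inferred from this one batch (the project's real code book may define more categories):

```python
import json

# Allowed values per dimension, inferred from this one batch;
# the actual code book may include more categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "government"},
    "reasoning": {"none", "consequentialist", "deontological",
                  "virtue", "contractualist"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def invalid_codes(records):
    """Yield (comment_id, dimension, value) for every out-of-code-book value."""
    for rec in records:
        for dimension, allowed in ALLOWED.items():
            value = rec.get(dimension)
            if value not in allowed:
                yield rec.get("id", "<missing id>"), dimension, value

with open("raw_llm_responses.json") as f:  # same assumed filename as above
    records = json.load(f)

for comment_id, dimension, value in invalid_codes(records):
    print(f"{comment_id}: unexpected {dimension} value {value!r}")
```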