Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If the ai is replacing jobs
Who will earn and who will buy
And ultimately less…
ytc_UgwqS0noZ…
What about the human art you didnt find emotionally gripping or impressive, and …
ytr_UgxkyUSf7…
It feels like David got actually hacked and the hacker banned shlep to start dra…
ytc_UgzVjVu5v…
Imagine an AI that uses the Bible as its legal authority...Everyone will be judg…
ytc_Ugy5tcq6L…
Lol it's backwards. Billionaires want people to think AI is bad, because it take…
ytc_UgzLxkb7p…
Hello Dr. Hawking,
I shared your concern until recently when I heard another AI…
rdc_cthxlxg
I am proud of anthropic. If possible, I will buy something for them. Who wants t…
ytc_UgxmnI-f3…
What folks seem to not care is that AI writes shitty papers. Yeah, AI could "wri…
ytc_UgwS1MgaC…
Comment
I think you stated the one key to properly using AI systems at this time ... to help with legal research. They can search through reems of data and case law and rulings and documents to find items to look at. By that I mean YOU the lawyer look at them to see if relevant or correct. That can be an incredible time saver and they can find things (in the complexities of law) a person is not aware of. But, they do make mistakes and mistaken interpretations of the material so it needs to be reviewed and organized for human use. And, if you use a tool it must be structured or created FOR that purpose (searching legal documents), and not general chat (I heard on the street from the taco vendor that ....). Otherwise you will get crap as we see lol.
I sort of wonder if LexisNexis or Westlaw as mentioned in this video are working on dedicated AI's with their tools to do exactly this. That would be perfect .. train an AI to search all the material they already have to find all the proper references and potential pieces and such (with all cited case numbers lol).
Lastly -- I want to see what is making ChatGPT claim these are real cases. I am more interested in how the AI can effectively be taught to lie or be so stupid and not have it detected, either on purpose or accidentally. As not being able to detect these falsehoods is a far more serious problem than people might understand at this point. Believe me I know how errors like this can spiral in common software systems out of control. Imagine this within AI systems and self learning....
youtube
AI Responsibility
2023-07-04T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy3d2l8HlEE3dy6IAZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyuOoIgcKkO-vl0U_t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwEM3qctoQ1NB2E6RJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwr4KXbFjPketOxaNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzAlAOKDiaBQK5hh-J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwafL6P-pK40mZxUcl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVcajFKwg9PnMqbMh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxsL6WeUUO8q2Lj9Eh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdpctoZsrh1a4ZX714AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyFrcuYxhJVzuGKstx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
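The lookup-by-comment-ID workflow above can be sketched in a few lines of Python: parse the raw LLM response as JSON and index the records by their `id` field. This is a minimal sketch, not the tool's actual implementation; the two records are copied from the response shown above, truncated for brevity.

```python
import json

# Two records from the raw LLM response above (truncated for brevity)
raw_response = """[
  {"id": "ytc_Ugy3d2l8HlEE3dy6IAZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyuOoIgcKkO-vl0U_t4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

records = json.loads(raw_response)

# Index by comment ID so a coded comment can be fetched in O(1)
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgyuOoIgcKkO-vl0U_t4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # user approval
```

A malformed response would raise `json.JSONDecodeError` at `json.loads`, which is one reason to keep the exact raw output inspectable per comment.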