2 Comments
User's avatar
Jack Maris's avatar

I'm interested in whether specific AI systems develop 'moats' in the legal system, or if instead we end up with a "bring your own model" approach. I'd bet on the former

Human expert witnesses gain credibility through their credentials (e.g., did you pass the bar?) But it wouldn't be difficult to make an LLM a) extremely legally competent (passes any test you ask, evinces no bias in subjects outside the case) and b) tailored to respond in some way in one particular case. Imagine: "Respond as an expert in forensics, who is convinced against any exculpatory evidence that the defendant was at the saloon on that night".

So to counter this there'd be a small set of 'industry-standard' off-the-shelf models, _or_ if you wanted to use a specific domain expert model that it'd require a length process where both sides agree on what data goes into the training set, how to avoid bias, etc.

Expand full comment
Deepak Subburam's avatar

Interesting. Yes, I hadn't thought of a "backdoor" like instruction that makes the model biased towards the current case but otherwise acts unbiased and therefore passes voir dire (cross-examination in court to evince competence and impartiality).

'Industry-standard' models may still be considered biased (e.g. Gemini is considered left-wing and Grok right-wing) though they won't have the backdoor instruction problem. Maybe there will be a variety of industry-standard models that each side will choose from and the jury decides which one is more credible.

Another way to solve the backdoor problem, especially if it is a major case like the Tik Tok case where even the 'industry-standard model' (e.g. Meta's Llama 3) can be suspected of deliberate manipulation, is to use a set of weights/version of the model that predates the case, by some margin.

Expand full comment