Note: Creepio, an AI, is a featured player among Auralnauts.
The current infatuation with Artificial Intelligence (AI), especially at the state bar, which is pushing CLEs urging lawyers to get on the AI bandwagon, is generally an unserious infatuation with a marketing concept.
AI and LLMs – large language models, on which much of recent AI is based – have nothing to do with accuracy. So, for a legal practice or any kind of professional activity in which accuracy is priority number one, AI/LLM is a pipe dream. A lawyer cannot be wrong about whether a murder or misconduct took place in one out of every hundred cases. Rather, a lawyer needs to get the difference between the two right 100% of the time. But AI/LLM uses predictive analytics – an algorithm – to decide whether a murder or misconduct occurred without consideration of the actual facts at issue.
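To see why prediction is not accuracy, here is a deliberately tiny sketch (not a real LLM, and the training text is invented for illustration): a bigram model that "answers" by picking whichever word most often followed the previous one in its training data. Statistical frequency, not the facts of any case, determines the output.

```python
from collections import Counter, defaultdict

# Invented toy corpus -- three sentences, two about "misconduct",
# one about "murder".
corpus = (
    "the defendant committed misconduct . "
    "the defendant committed murder . "
    "the defendant committed misconduct ."
).split()

# Count which word follows which (a bigram frequency table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    """Return the statistically most likely next word -- a guess based
    on frequency, with no reference to what actually happened."""
    return following[prev].most_common(1)[0][0]

print(predict("committed"))  # 'misconduct' -- because it was more
                             # frequent in training, not because it is
                             # what occurred in the case at hand
```

Real LLMs are vastly more sophisticated, but the underlying move is the same: the output is the most probable continuation of the text, which may or may not be true.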
Yes, there is much in life for which accuracy is not important. For those tasks, AI/LLM will be incredibly useful for making connections across all of the data and metadata now being collected about us. Simply being better than a coin flip at guessing something can be a major advance for certain kinds of work. More on that below.
Note: Most AI/LLM applications, so far, are less accurate than a coin flip. See ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate.
For now, there is a more immediate problem for lawyers. Feeding clients' information into this AI/LLM modeling is likely to run afoul of the confidentiality obligations lawyers must maintain: these AI/LLM models are designed without any confidentiality, or are even intended to merge "data" across clients, all the while "futzing" what the models are doing with all of that data.
Bruce Schneier did a post on Slate that Ars Technica expanded on. The Ars Technica folks spotlight the lack of confidentiality built into AI/LLM:
We’ve recently seen a movement from companies like Google and Microsoft to feed what users create through AI models for the purposes of assistance and analysis. Microsoft is also building AI copilots into Windows, which require remote cloud processing to work. That means private user data goes to a remote server where it is analyzed outside of user control. Even if run locally, sufficiently advanced AI models will likely “understand” the contents of your device, including image content. Microsoft recently said, “Soon there will be a Copilot for everyone and for everything you do.”
Despite assurances of privacy from these companies, it’s not hard to imagine a future where AI agents probing our sensitive files in the name of assistance start phoning home to help customize the advertising experience. Eventually, government and law enforcement pressure in some regions could compromise user privacy on a massive scale. Journalists and human rights workers could become initial targets of this new form of automated surveillance.
Advertising is really just the surface of the problem, however. Confidentiality is antithetical to how AI/LLM functions, but this lack of confidentiality will also be hidden from us. Schneier has these details (hat tip from pixel envy for this additional info/link).
The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.
This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company’s position? Or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.
The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.
And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.
So, if we are letting AI/LLM into our lives and into our work, we will also be letting AI/LLM use our “work” to make its own connections so that it can be effective in “helping” us.
At present, there are nearly no restrictions on what AI/LLM can do with the data you provide it (especially in the US). The license agreement with each new tech product presents a yes/no contract of adhesion. There is no negotiation about the substance; you either agree and get to use the software/hardware, or you disagree and are barred from using it.
Which brings me to the accuracy issues. For actual legal work, or any kind of work for which factual accuracy is necessary, AI/LLM cannot be trusted. No one should think that the first result from a Google search is the complete answer. But we know that Google's search engine – its AI – in general produced better search results than AltaVista's. So, over time, AltaVista declined in use as people switched their searches to Google. As a result, most people reading this will not even know what AltaVista was.
For AI/LLM to succeed in general, it simply needs to be slightly better than what currently exists. And, in general, AI/LLM is being pushed into activities where there really is no existing process at all. All of this data that companies (and the government) hold is too amorphous to do anything with but supply the most basic of connections.
The excitement for AI/LLM right now is that it can create some order in an unmapped wilderness. The accuracy needed for these tasks is little more than being better than no accuracy at all.
For example, imagine a government or company that wants to identify everyone in Wisconsin who has a lakeside cabin. Individually searching county property records for this information is a monumental task. Searching zillow.com is not much better. On the other hand, pulling together tidbits of data correlated with ownership of lakeside cabins could lead to a dataset that has better than 50% accuracy for a fraction of the cost. AI/LLM is being designed to do this kind of correlation.
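The lakeside-cabin idea can be sketched in a few lines. Everything here is hypothetical: the signals (boat registration, a northwoods utility bill, a winterization service contract) and their weights are invented for illustration. The point is only that several weak, cheap signals can be combined into a guess that beats a coin flip without ever opening the county property records.

```python
# Hypothetical records: (name, boat_registration, northwoods_utility_bill,
# winterization_service). All names and values are made up.
RECORDS = [
    ("owner_a", True,  True,  True),
    ("owner_b", True,  False, False),
    ("owner_c", False, True,  True),
    ("owner_d", False, False, False),
]

# Invented weights reflecting how strongly each signal is assumed to
# correlate with owning a lakeside cabin.
WEIGHTS = {"boat": 0.4, "utility": 0.35, "winterize": 0.25}

def likely_cabin_owner(boat, utility, winterize, threshold=0.5):
    """Sum the weighted signals and flag anyone whose score clears the
    threshold. Accuracy depends entirely on how well the chosen signals
    actually correlate with cabin ownership -- no record is verified."""
    score = (WEIGHTS["boat"] * boat
             + WEIGHTS["utility"] * utility
             + WEIGHTS["winterize"] * winterize)
    return score >= threshold

flagged = [name for name, b, u, w in RECORDS if likely_cabin_owner(b, u, w)]
print(flagged)  # ['owner_a', 'owner_c']
```

A real system would learn the weights from data rather than hand-pick them, but the trade-off is the same: cheap, somewhat-accurate correlation in place of expensive, verified fact-gathering.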
This expansive use of correlation is what has tech companies (and numerous governments) so excited. At a fraction of the cost in personnel and time, they can gain access to information that is somewhat accurate.
And there are, in practical terms, no limits on how this information is collected nor on how it is used. In the civil rights context, we know of massive databases being compiled from our phones:
- Apple and Google push notification data provided to feds
- AT&T automatically providing phone records to feds via Hemisphere project
This problem is where the legal profession should be entering the picture. Rather than as a consumer of AI/LLM, legal professionals should be considering how to monitor and administer AI/LLM. Schneier explains:
If we want trustworthy AI, we need to require trustworthy AI controllers.
We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants.
The legal profession should be leading the way to establish limits and guardrails for AI/LLM that starkly and obviously constrain how these systems work and the expectations they can create among lawyers and the public at large. Until then, the legal profession should have little to nothing to do with using AI/LLM in legal practice. In other words, stay away from Creepio.