Earlier this week The New York Times published a must-read article entitled “Which Workers Will AI Hurt Most: The Young or The Experienced?”

While it’s an interesting question, we do know that the rise of AI will impact jobs, and we are starting to see that in real time across various industries (especially the tech industry) as AI performs parts of certain jobs, companies seek accelerated ROIs on their AI investments, and companies look to reduce their costs (especially publicly traded companies seeking to drive up their stock prices).

Jobs in the legal industry will also undoubtedly be impacted, and the article shared this example about AI’s effect on legal jobs in the paragraphs immediately below:

“Robert Plotkin, a partner in a small law firm specializing in intellectual property, said A.I. had not affected his firm’s need for lower-skilled workers like paralegals, who format the documents that his firm submits to the patent office. But his firm now uses roughly half as many contract lawyers, including some with several years of experience, as it used a few years ago, before the availability of generative A.I., he added.”

“These more senior lawyers draft patent applications for clients, which Mr. Plotkin then reviews and asks them to revise. But he can often draft applications more efficiently with the help of an A.I. assistant, except when the patent involves a field of science or technology that he is unfamiliar with.”

“I’ve become very efficient at using A.I. as a tool to help me draft applications in a way that’s reduced our need for contract lawyers,” Mr. Plotkin said.

As AI technology continues to advance quickly, I believe it will increasingly be able to perform important aspects of the jobs of virtually every lawyer or legal professional in legal organizations.

While of course there may be some specialized, human-centric things that lawyers do that AI cannot perform – like going to court to make arguments on a client’s behalf, advocating for a client during a face-to-face contract negotiation, or building deep and personal relationships with clients – leading AI solutions will increasingly be able to perform more aspects of a lawyer’s or legal professional’s job over time.

By using AI tools, lawyers and legal professionals will also be able to free up more time so they can perform their jobs at a more strategic and higher level – but the reality is that we will probably need fewer of them over time.

Increasingly, we are seeing managers use AI tools to create performance reviews for their direct reports.

This will become a big trend as companies continue to reduce the ranks of “middle managers” and increase the remaining managers’ spans of control so that they manage more people on their teams.

While I don’t think The Office‘s legendary Scranton manager Michael Scott leveraged AI for his admin Pam Beesly‘s performance review, I’ve spoken to some people who feel their recent performance reviews were written in a very robotic, matter-of-fact manner – and perhaps by AI.

While there are definitely benefits to using these AI tools as a starting point to help draft a performance review – such as in the areas of data collection, completeness, time savings, and reduction of potential bias – I also think managers need to be careful and responsible when using these tools.

I think the ultimate output of any initial AI-generated performance review still needs to be carefully crafted in the voice of the manager and with real examples of an employee’s impact and opportunities for improvement.

Knowing how hard our teammates work and the important roles that managers play in the development and growth of their teammates, managers should use AI as a tool to jumpstart performance reviews.

However, in this AI era, managers shouldn’t completely abdicate to AI tools their important responsibility for creating meaningful performance reviews. If that happens, then perhaps we ought to have AI as our “check the box” managers instead of humans 😎.

In our AI era where AI can be used as both a tool and a weapon, it’s important for AI providers to earn the trust of their customers and regulators – and especially as our laws (largely with the exception of the EU AI Act) have not kept up with the rapid pace of AI advancement.

A wise person once told me that trust cannot be claimed – it must be earned. A vital way to earn trust in life – and in business – is to be openly transparent on a consistent basis.

So, it was great to see Microsoft issue its second annual Responsible AI Transparency Report a few weeks ago, in which it details the steps it takes to build its AI solutions in an ethical and responsible manner.

The 35-page report is chock-full of information about how Microsoft is translating its Responsible AI Principles into actual practices, and I believe that Microsoft is the only major AI provider that issues such a periodic AI transparency report.

Both legal AI solution providers and legal services providers – like law firms and alternative legal services providers – should learn from Microsoft and consider preparing similar AI transparency reports for their respective solutions and services.

Nowadays, there are many legal AI solutions to choose from, and competition is fierce. Issuing an AI transparency report would help legal AI solution providers differentiate themselves from others and provide clarity that demonstrates why and how they are trustworthy. It may also help legal AI solution providers secure much-needed capital from investors.

In our AI era, in-house legal teams increasingly want to clearly understand how their law firms are embracing AI solutions to deliver better legal services and how law firms can deliver lower-cost legal services when they adopt AI as a tool to help perform legal work. A periodic law firm AI transparency report detailing which AI solutions a firm uses and how those AI solutions offer law departments higher value would be welcomed – especially as BigLaw has habitually increased its fees to clients on an annual basis. These law firm AI transparency reports would also have the additional benefit of demonstrating how those law firms are adhering to the growing number of AI legal ethics opinions being issued across the US.

Earlier this week, it was reported that Amazon CEO Andy Jassy (Andy and I both attended Scarsdale High School in the suburbs of New York City back in the day) said the size of Amazon’s workforce will shrink due to AI.

In a June 17, 2025 email to Amazon employees that was shared across the internet, Jassy communicated the following: “As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.”

Of course, Amazon is a huge company with a very large workforce. According to Statista, Amazon has over 1.5 million employees, so it makes good business sense for such a high-profile, large employer to continue to look for opportunities to manage its expenses.

However, Jassy’s email brings up a hot topic in our era of AI. Namely, what will AI’s impact be on the job market moving forward?

This is also a hot topic in the legal industry. What sort of impact will AI have on the future employment opportunities for lawyers and legal professionals?

My personal opinion is that the growth of AI over time will mean that companies and law firms have less need for human lawyers and legal professionals, as advanced AI tools become increasingly able to perform some of the repetitive and routine tasks traditionally handled by the legal profession – and at a high level as these tools get better over time.

In addition to the fact that AI tools will be able to do some of the work that is and will be delivered by lawyers and legal professionals, I believe that large in-house legal departments and “BigLaw” will reduce the size of their legal teams as AI advances, largely for these reasons:

  • “Rinse and Repeat”: I think large in-house legal departments and BigLaw will look at what Amazon and other large tech companies are saying and doing regarding their employees in this era of AI, learn from their practices, and essentially “rinse and repeat” what those companies are doing.
  • Increased Legal Budget Constraints: Given growing uncertainty in the geopolitical and business environments, legal departments and law firms may face increasing pressure to reduce their budgets and operational costs. Assuming that is the case, deploying advanced AI tools and seeing their benefits in performing legal tasks may incentivize these organizations to accelerate their efforts to reduce headcount and increasingly rely on those AI solutions.
  • AI Return on Investment (“ROI”) Realization: AI solutions are not cheap, and they require significant financial investment. As AI solutions in the legal space become more mainstream, I believe that legal organizations will want to accelerate their ROIs on their respective AI investments in the form of optimizing their headcounts and reducing expenses.
  • Maximize Profits: Legal departments and BigLaw are in the business of maximizing shareholder and partner value. Any opportunity to use technology in a manner that both serves their clients well and manages their costs is highly welcomed and rewarded in Corporate America.

So, the reality is that lawyers and legal professionals (like most professions) will not be immune from the potential negative job implications of an increasingly AI-driven world.

What this means for lawyers and legal professionals moving forward is that they need to actively embrace and skill up on AI solutions to help them do their jobs in a more productive, efficient, and faster manner. Doing so will also enable lawyers and legal professionals to free up their time to perform even more high-value and high-impact work for their companies and their clients. As a result, it may help positively differentiate lawyers and legal professionals who are AI “power users” and perhaps make them a little less easily replaceable.

As AI becomes more pervasive and lawyers are seeking guidance on how to navigate our rapidly evolving AI legal and regulatory environment, we are seeing a big increase in continuing legal education (CLE) events regarding AI and the law.

These events are often produced by law firms, bar associations, law schools, legal tech providers, general AI providers, and legal industry organizations. They can be delivered in person, online via webinar, or in a hybrid format. Sometimes there’s a fee to attend these events and sometimes they are free of charge. Sometimes these events qualify for formal CLE credit for lawyers and sometimes they don’t.

I think it’s good that we are seeing an increase in AI CLEs, as lawyers need to become more educated about AI and have an ethical obligation to understand the benefits and risks of using technology to help serve their clients. I try to attend as many of these events as I can so that I continue to “skill-up” on AI and obtain different perspectives in this area.

Recently I had the opportunity to participate in two CLEs regarding AI and the law – one as a “roundtable” participant in Chicago that was coordinated by ACC Chicago and hosted by the law firm Mayer Brown and one as a speaker on an AI panel that was hosted by Baker McKenzie in Washington, DC. I really enjoyed participating in these events and I learned a lot.

As we see this large influx of AI CLEs, it can be hard to determine which ones make the most sense to attend. Here are my thoughts on what makes a strong AI CLE:

Offer Practical Takeaways: Like most CLEs, the best AI CLEs are the ones that provide the audience with real-world best practices that lawyers can actually use to serve their clients. Those AI CLEs that focus on providing practical steps for lawyers to navigate growing AI considerations versus focusing on legal theory regarding AI offer the highest value. AI CLEs would also be well-served by providing a “leave-behind” or other materials that capture such practical takeaways.

Basics of AI: I think it’s smart to provide a short overview of the basics of AI at the very beginning of any AI CLE – especially since many lawyers remain unfamiliar with the fundamentals of AI technology. Those AI basics should be delivered in a super easy-to-understand fashion and without a heavy dose of technology jargon so that they can be easily consumed by a legal-centric audience. Also, consider whether it makes sense to provide a very short AI demo – and be sure that your demo actually works!

Have Great Speakers: While it may be very obvious, having excellent speakers will help make your AI CLE memorable – and lousy speakers will do the opposite. I continue to see some lousy speakers at AI CLEs (and at many non-AI CLEs) as so-called “AI Experts” proliferate. Please take the time to conduct the appropriate due diligence to secure top-notch speakers – especially if you are charging a fee to attend.

Balanced Speakers: Your speaker slate for an AI CLE should also be inclusive so that it can represent a wide range of perspectives. Of course, this focus on inclusivity is consistent with key leading Responsible AI principles that we have been seeing recently like fairness and inclusivity. Also, please be sure to avoid any “manels.”

In-Person vs. Virtual: There are various pros and cons associated with in-person versus virtual CLEs. While in-person events can be pricey to produce and require a fair amount of logistical coordination, they also provide better networking opportunities for speakers and the audience. I also believe that an appropriate venue and food/refreshments are critical in making an in-person CLE experience successful. While webinars don’t offer the same networking opportunities as in-person CLEs, they typically cost less to produce and can be scaled to reach a much wider remote audience.

Roundtable Format: As the intersection of AI and the law is still in its early stages, I’m a big fan of informal roundtable CLE sessions about AI where there are discussion leaders/facilitators for various AI topics and roundtable participants actively contribute to the conversations. I find that this roundtable format can enable an immediate and rich sharing of ideas and best practices on AI – especially when there’s an understanding that the Chatham House Rule is in effect.

Panel Format: AI CLEs structured as one or more topic-focused panels are also highly popular. In my experience, there should be no more than five people per panel in order to provide panelists with equitable opportunities to contribute to the discussion, and a highly skilled moderator is needed to keep the discussion moving forward.

Presentation Format: AI CLEs can also be delivered in a traditional presentation format whereby a presenter delivers his/her presentation to an audience via PowerPoint slides or something similar. If you decide to go down this route for an AI CLE, please consider having relatively short presentations that are no more than 15 minutes in length – perhaps in a TED-style talk format – as your audience’s attention span will be very limited and few presenters can capture an audience’s attention for an extended period of time.

AI Legal Ethics: Obtaining CLE credits to satisfy state bar requirements is an incentive for lawyers to attend CLEs. The AI legal ethics area is actively evolving as we speak. In my view, educating lawyers on how to use AI in a responsible and legally ethical fashion is a topical area that is important and growing in demand.

Enable Audience Participation: The best AI CLEs promote very active audience participation by providing opportunities for the audience to pose questions to speakers – whether that is done live in real-time or via some technology option that is part of any virtual webinar. Carefully consider how your AI CLE enables audience participation in some meaningful fashion for the learning benefit of everyone.

Last week (on Valentine’s Day), the leading airline Air Canada was ordered to issue a refund to a customer who was misled by its chatbot. This case has received a fair amount of attention across the internet and social media.

The facts of the case are pretty straightforward. An Air Canada customer sought to obtain a bereavement fare for travel after the passing of his grandmother. The customer relied on information provided to him by Air Canada’s chatbot that he could apply for a refund retroactively after he purchased his ticket. When he applied for a refund, Air Canada informed him that bereavement rates would not be applicable retroactively on completed travel. The customer provided Air Canada with a screenshot of the bot’s advice and then he sued Air Canada in small claims court for the fare difference.

While Air Canada said that the correct information about bereavement fares could be found on its website and also maintained that the chatbot was a separate legal entity for whose actions Air Canada was not responsible, the court ruled in favor of the customer.

This “Canada Chatbot” case is an interesting one. First off, if I were providing legal advice to Air Canada, I would have advised it to provide the customer with the appropriate bereavement fare refund plus some suitable credit for future air travel, to help avoid any potential legal claim and any associated negative publicity.

Here are my thoughts on the AI-specific aspects of this case:

New AI Case Law: While this is only a small-claims court case, it shows that as AI becomes more prevalent across all industries, we will also see an increase in AI jurisprudence. We need to remember that in addition to the growing body of applicable AI rules and regulations, relevant legal cases will also significantly impact the development of AI law. Hopefully, lawyers and judges will increasingly understand AI in order to help shape meaningful AI law.

The Rise of Chatbots: As this case demonstrates, Air Canada, like many companies, uses chatbots as a digital concierge to help serve its customers and to enable smarter utilization of its human resources. As a younger generation of potential customers who have grown up with texting and smartphone apps enters the marketplace, and as better AI-powered chatbot tools become available, we will see even more organizations use chatbots to help address questions from their customer base. In the legal industry, there are growing opportunities for legal departments to use bots to serve their business clients, for law firms to use bots to convey relevant information to their clients, and for our court systems to leverage bots to improve access to justice for citizens.

We Are Our Bots: The bots that organizations use to interact with the public are really extensions of the organizations themselves. They serve as an organization’s agents and representatives, and it will be difficult for organizations to disclaim responsibility when their bots supply inaccurate information that customers rely upon – especially when those organizations are highly sophisticated and have “deep pockets.” Organizations that choose to use chatbots also need to carefully vet and select the providers who supply the underlying AI technology.

Proactive Chatbot Oversight: When organizations use bots to serve their customers, they need to make sure the data that they “feed” to the bot is relevant, accurate, and constantly updated – they cannot act in a laissez-faire manner. All organizations, including legal organizations, need to properly oversee and maintain their respective chatbot solutions on an ongoing basis. For legal organizations, this active oversight function is similar to what lawyers must do from a legal ethics perspective in overseeing and managing paralegals, legal professionals, and technology tools like cloud computing.

Chatbot Transparency: If legal organizations are using chatbots to interact with the public or their clients, it’s also a good idea for those organizations to make clear that users are not interacting with an actual lawyer when connecting with a chatbot.

Deploying chatbots as a strategy to serve customers can offer a variety of benefits. Please make sure that you are smart and responsible when deploying chatbots.

It was great to be “back in law school” this past Friday to attend the “AI and Law: Navigating the Legal Landscape of Artificial Intelligence Symposium” at Northwestern Pritzker School of Law that was produced by its Journal of Technology and Intellectual Property in Chicago.

This terrific event was spearheaded by Northwestern Law Professor Dan Linna – who is an incredible legal educator. Professor Linna is also one of the foremost legal experts regarding AI and the law. He’s highly respected, his classes prepare his law students for the practical realities of the “real world,” and I have had the good fortune to learn from him.

Professor Linna and his team put together an outstanding agenda for this event, and the conference attendees were treated to valuable insights from various leaders across legal academia.

As an in-house lawyer, I really enjoy attending these law school events on important topics like AI, as they provide me with an opportunity to escape my own personal and professional “echo chamber” and to learn from legal leaders at the cutting edge of important issues in the AI area.

After introductions by Northwestern Law Dean Hari Osofsky and Professor Linna, University of Colorado Law Professor Harry Surden kicked off the event with a keynote entitled “Advances in Artificial Intelligence and Law: ChatGPT, Large Language Models (LLMs), and Legal Practice.”

Professor Surden’s talk provided an overview of GenAI, and he shared some thoughtful observations about GenAI and GPT-4 in the slides below. For example, he talked about how these tools are reasonably good – but you need to proceed with caution. He said that GPT-4 is akin to “a very good 3rd year law student,” and that interesting comparison made good sense to me. Professor Surden also warned that current GenAI tools have various limitations and struggle with certain scenarios, including “complex legal reasoning,” “non-standard scenarios that are out of distribution,” “hard cases of subjective judgment,” and “complete accuracy and reliability.”

The next speakers were Professor Sabine Brunswicker and Professor J.J. Prescott, who spoke about using AI tools to deliver legal services. Professor Brunswicker talked about using AI chatbots, the role of empathy in chatbots, and the idea that empathetic chatbots may be more helpful to users (I did not know that chatbots could actually be empathetic – but I guess they can be programmed accordingly). Here’s an interesting slide from Professor Brunswicker’s talk:

Professor Prescott explored the ability of AI tools to improve access to justice for citizens – especially given the significant expense of lawyers nowadays. There is a perspective that some of these tools may be viewed as a form of “second-class justice” for potential litigants; however, these tools are better than having no advice from lawyers whatsoever. There was also a discussion about the many opportunities for tech/AI to explain things to others, to make litigants feel that they were actually heard, and to lower the effort required for people to find information compared to Frequently Asked Questions (FAQ)-type documents. Also, as we have seen in our current tech world, some folks would rather use apps than engage with humans; likewise, not everyone may want to engage with a human lawyer. There was also an interesting point posed by an Illinois state judge in the audience about the ability of AI tools to free up time for judges and courts to perform more important tasks for citizens.

The next set of speakers addressed AI regulation and privacy issues. Professor Bryan Choi shared his thoughts as depicted in the slide below that AI regulations are often premised on standards of care and that it may make sense to have a set of “vertical” standards based on key areas like data, learning and testing.

Professor April Dawson shared her thoughts regarding the topic of “Constitutional AI and Algorithmic Adjudication.” Based on a poll of attendees, it seemed like the audience trusted AI adjudication much less than traditional human decision-making in legal contexts. Professor Dawson wrapped up her talk with this terrific slide below where she concluded with these 3 key observations: (1) change/disruption is here; (2) lawyers need to understand this transformative AI technology; and (3) legal education needs to better educate law students. In fact, I think this slide nicely summarized the major takeaways from the conference.

The next speaker was Professor Charlotte Tschider, but I missed her talk as I needed to attend a work conference call.

After a lunch break, there was another keynote presentation, delivered by Professor Pamela Samuelson on the intersection of copyright law and AI and appropriately entitled “Generative AI Meets Copyright.” Professor Samuelson delivered a very insightful presentation on this important topic.

The final speakers provided their unique perspectives regarding AI and intellectual property. As an in-house lawyer, I appreciated Professor Nicole Morris’s practical suggestion below on how to avoid a situation similar to what happened to Samsung last year, when some of its employees accidentally leaked company trade secret information to ChatGPT.

I’m really glad I invested the time to attend this excellent event as I learned a lot and I was able to network with so many smart lawyers, legal professionals and law students. The law students who have the ability to learn from the outstanding law professors who spoke at this event are super lucky!

Last year the internet went into a frenzy after pictures like the one above of Pope Francis wearing a puffer jacket were circulated. These images were created by generative AI, and of course they demonstrate the powerful nature of AI technology.

The Pope has also been proactive in stressing the importance of ethical AI. It’s very significant to see a person of his great influence and stature highlight the importance of responsible AI.

Recent news reports indicate that the Pope and the Vatican rely on an AI expert, Friar Paolo Benanti, to help shape their thinking on AI issues and to serve as their resident AI expert. The Associated Press recently reported on Benanti’s role in the AI area, as did The New York Times. As a result, Benanti is gaining increasing recognition across the globe for his visible leadership in advising the Pope and the Vatican on AI.

The Pope and the Vatican are very smart to enlist the help of an AI specialist to guide them on the myriad AI-related issues that we will see in a growing AI world. In fact, I think there are some lessons the legal industry can learn from the fact that the Pope has an “AI Lead”:

Understanding the Benefits & Risks of AI: As part of a lawyer’s ethical duties when using technology to serve clients, lawyers need to understand the potential pros and cons associated with using that technology – including AI. Law firms and legal departments would benefit from having AI-focused people on their teams – or as consultants – to educate them about AI and keep them up to speed on the ever-changing AI tech and regulatory landscape, so they can be well-positioned to use AI tools in a responsible manner. Obtaining consistent help from AI experts would be both a smart business decision and a smart legal decision for all legal organizations.

The Rise of AI “Chiefs”: As AI technology becomes more universally embraced, we will also see legal organizations become more focused on AI governance and hire for AI-centric roles such as Chief AI Officer, Chief Data Scientist, Chief Responsible AI Officer, or similar senior AI lead roles. These senior AI roles will also have an important “AI Ambassador” component to them.

Multi-Disciplinary Skills: To be successful in the role of an AI Lead, a person needs very broad skill sets in key areas like technology, business, data, legal, compliance, and privacy, as well as effective ways-of-working skills like clear communication, effective collaboration, change management, and empathy.

If the Pope and the Vatican have made the wise decision to invest in an AI leader so they can better serve their large community, legal organizations should also explore whether it makes sense to identify and secure the appropriate AI talent so they can better serve their clients in a responsible and ethical fashion.

A must-read report entitled “Generative Artificial Intelligence and the Workforce” was released last week by The Burning Glass Institute and SHRM. This report was also featured in an article in The New York Times.

This report is interesting for the legal industry since it shares the following observations regarding the potential impact of GenAI in the legal industry:

  • Regulatory Compliance: “Examples of how AI will place certain occupations at high risk include:..Regulatory compliance, a task overseen by auditors, compliance officers and lawyers, demands thoroughness and accuracy. GenAI can facilitate quicker compliance checks with fewer errors.” (Page 5)
  • Repercussions by Industry: “The industries most likely to be affected include financial services, law and marketing research. For example, legal advisors face potential automation in creating standardized documents…” (Page 6)
  • Legal Occupations Affected by GenAI: Legal was identified as a key occupation most affected by GenAI compared to previous automation waves. In fact, law offices received the second-highest “GenAI Exposure Score” of 3.906 among occupations, right behind mortgage and nonmortgage loan brokers. (Pages 13 and 16)

Of course, only time will tell regarding the true impact of AI upon the legal profession.

While I don’t believe that AI will be replacing lawyers anytime soon, there is no doubt that sophisticated AI tools will be able to perform and automate certain tasks – especially routine and repetitive ones – that have been traditionally performed by lawyers, paralegals and other legal professionals.

A key take-away from this report is that the legal profession needs to be open to learning more about AI and embracing AI tools to better serve their clients so that lawyers can practice law at the top of their law license.

I’m also adding the interesting graphic below from the report, which provides a summary of workforce skills that will increase or decrease in importance with the rise of GenAI tools.

Hopefully our law schools will be teaching some of the skills identified above that will be increasingly important for lawyer success as AI technology advances and lawyers use more AI solutions to deliver legal services to their clients. Key skills like “AI Literacy,” “Emotional Intelligence,” “Continuous Learning,” “Critical Thinking,” “Digital Security and Privacy,” and “Creativity” will be even more critical for lawyers to invest in and build upon moving forward in an AI-powered world.