I’ve attended some legal industry conferences and events this month where in-house counsel have asked me questions about internal company policies regarding AI.
These questions have ranged from "Should my organization have an AI policy?" to "What should such an AI policy say?" to "Can you please provide me with a sample AI policy?" and other similar questions.
I think we will be seeing more questions in this area from in-house lawyers as AI solutions become more prevalent.
While I’m not in a position to provide legal advice to others given my role as a corporate counsel, here are my guiding principles regarding the development of AI policies for an organization:
☑️ Policy or No Policy?: Senior company leaders, in conjunction with advice from their legal counsel, are in the best position to determine whether an AI company policy is required or not.
☑️ Learn from the Cloud: Many organizations nowadays have a policy about cloud computing since cloud usage is so widespread. If your organization has a cloud policy, apply the lessons from that experience when creating a suitable AI policy.
☑️ Don’t Cut & Paste: There’s no “one size fits all” AI policy as you need to tailor one for the specific needs of your organization.
☑️ Clarity & Simplicity: Don’t overengineer your AI policies. Make them super clear and straightforward.
☑️ Policy Updates: Make periodic updates to your AI policy as needed. If you have a legacy AI policy, consider updating that policy given the developments in the generative AI space over the past year.
☑️ Policy Implementation: If you establish an AI policy, be sure that your organization follows through on its requirements and that you provide AI policy awareness and training to your teammates.
☑️ Get Help: If needed, seek outside counsel assistance in developing your AI policy. Exchange AI policy development ideas and best practices with other in-house counsel. Of course, if you want some initial internal AI policy ideas, perhaps ChatGPT can help you get started 😉.