Understanding How Generative AI Can Affect Your Business’ Data Privacy And Ownership Is Crucial
This article was originally published in HR Legal & Compliance Excellence on HR.com on May 2, 2024, and is republished here with permission from the publication.
“In assessing a generative AI product, it is critical to understand issues of data ownership and privacy. This cumbersome task is necessary to learn how the AI platform will use data, whether the data shared is entering an open or closed system, and whether the data will be used to train a large language model,” said Leonard Dietzen and Jacey Kaps, CIPP/US, Partners at RumbergerKirk.
In an exclusive interview with HR.com, Leonard and Jacey, RumbergerKirk partners with extensive experience in AI governance, share insights on the legal intricacies and best practices for effectively integrating AI into organizational structures. They also delve into how AI will impact regulatory compliance, data privacy, and intellectual property protection, and offer solutions for navigating the evolving AI landscape.
Excerpts from the interview:
Q. Considering the evolving landscape of global data privacy regulations, what factors should companies consider to ensure compliance and mitigate risks when integrating AI technologies?
Leonard & Jacey: To start, organizations must be familiar with the regulatory environment surrounding data privacy wherever they conduct business. Regulation intensifies with the sensitivity of the data involved, as seen in the heightened requirements for the healthcare and financial sectors. An organization that conducts business in Europe must also be familiar with the General Data Protection Regulation (GDPR).
Given that each industry’s data use is regulated by many governing bodies, a company implementing new AI products should engage professionals, from data privacy specialists to legal counsel, to assess its needs. If the resources exist, an organization can also create an internal AI task force to investigate new AI products and vendors and determine whether they have adequately and transparently identified how the company’s data will be used once the product is launched.
In assessing a generative AI product, it is critical to understand issues of data ownership and privacy. This cumbersome task is necessary to learn how the AI platform will use data, whether the data shared is entering an open or closed system, and whether the data will be used to train a large language model.
A company can minimize this risk by negotiating indemnification provisions in the vendor’s contract that cover data breaches or data use in violation of state, federal or global regulations. It is important to understand that most AI product terms are drafted in a fashion skewed heavily in favor of AI companies, and the indemnity offered is frequently of nominal value.
An organization should clearly define its expectations about the use of confidential information, such as personally identifiable information and healthcare data, when using AI. Disclosing this kind of information in an open environment creates significant risks, but organizations must also set forth acceptable use in a closed environment. In addition, they must determine which employees have access.
The AI task force should meet frequently to consider these issues and clarify AI policy. This, along with frequent employee training, can minimize risks.
Q. What are the major concerns organizations face when using AI? How can they be addressed?
Leonard & Jacey: Organizations must develop an AI policy. It should clearly state the acceptable uses of this technology for work, which data is not approved for sharing, and which products are approved for use. Once the policy is in place, employees should be trained and sign off that they have received the training and understand the consequences of non-compliance.
Data security is a major concern organizations face when using generative AI, particularly open AI systems. Therefore, the AI policy and training should ensure employees understand the differences between open and closed AI products and the potential impact on the organization’s data security.
Another major challenge is ownership of processes derived from generative AI. For example, if an employee creates software using generative AI, the ultimate owner of the product should be clearly defined by the AI policy or in the express terms of an employment agreement. This work helps organizations avoid acrimony or the loss of valuable processes if a departing employee deems the process or software their own property.
Another major challenge is keeping up with the rapid pace of new AI technology. Organizations must prioritize the AI task force’s recommendations and implement changes. Keeping up with the legal regulations and security risks associated with AI will be a day-to-day challenge, and proper delegation within the task force is key.
Q. Following in Florida’s footsteps, the New York State Bar Association has recently adopted guidelines for the ethical use of artificial intelligence. What are the takeaways for employers?
Leonard & Jacey: AI is here to stay, and companies that do not develop their governance frameworks and learn to adapt may find themselves behind their competitors. Many organizations are surprised that their employees are already using AI products without any guidance from management or HR.
AI impacts too many legal and business applications for one compliance officer to manage.
Companies must work with corporate counsel to learn how to both protect their IP rights and maintain ownership of their new AI-assisted creations. Clear policies and contract language with employees and vendors regarding the use of AI technology should help alleviate these issues. The U.S. Patent and Trademark Office has provided some guidance on using AI to create innovations and assets.