Businesses often employ AI in applications to unlock intelligent functionality, such as predicting relevant product recommendations for customers. Increasingly, businesses are building AI-powered applications that provide predictive functionality using sensitive information, which can be a significant benefit to users. For instance, some AI applications are trained on medical records to help predict patient diagnoses, while others are trained on private emails to suggest the next sentence of a text message or email.
Training predictive AI models on data that includes sensitive information, however, complicates compliance with data protection regulations. In the context of AI, these regulations require businesses to safeguard training data against unlawful access by unauthorized parties. Yet certain types of AI models exhibit inherent characteristics that make protecting the privacy of training data difficult. For instance, without appropriate safeguards, predictive AI models, such as the generative sequence models commonly used in sentence-completion applications, can unintentionally memorize sensitive information included in their training data. This unintentional memorization creates the risk that the model will leak sensitive information, such as a business's trade secrets or a user's credit card number, as a prediction in response to a new, previously unseen input.
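The memorization risk described above can be illustrated with a deliberately simple sketch. The snippet below is not the kind of model discussed in the article; it assumes a toy bigram next-word predictor as a stand-in for a generative sequence model, and the training messages and card number are entirely hypothetical. Because the secret appears verbatim in the training data, an innocuous prompt is enough to make the model regurgitate it as its prediction.

```python
from collections import defaultdict, Counter

# Toy bigram "next-word" predictor used purely to illustrate unintended
# memorization; real sentence-completion systems use much larger generative
# sequence models, but the failure mode is analogous.
class BigramPredictor:
    def __init__(self):
        # For each word, count which words follow it in the training data.
        self.counts = defaultdict(Counter)

    def train(self, messages):
        for msg in messages:
            tokens = msg.split()
            for prev, nxt in zip(tokens, tokens[1:]):
                self.counts[prev][nxt] += 1

    def complete(self, prompt, max_tokens=8):
        tokens = prompt.split()
        for _ in range(max_tokens):
            candidates = self.counts.get(tokens[-1])
            if not candidates:
                break
            # Greedily pick the most frequent continuation.
            tokens.append(candidates.most_common(1)[0][0])
        return " ".join(tokens)

# Hypothetical training corpus: mostly benign text plus one message
# containing a (fake) credit card number.
corpus = [
    "the meeting starts at noon tomorrow",
    "lunch downtown was great today",
    "my credit card number is 4111 2222 3333 4444",
]

model = BigramPredictor()
model.train(corpus)

# A short, innocuous prompt triggers verbatim reproduction of the secret.
print(model.complete("my credit card number"))
# -> my credit card number is 4111 2222 3333 4444
```

In this sketch the leak happens because the secret was seen during training and nothing in the training procedure discourages the model from reproducing it; mitigations discussed in the privacy literature, such as filtering sensitive strings from training data or training with differential privacy, aim to break exactly this link.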
Read the full article at the link below.