An Expert Perspective on Applying Machine Learning and Artificial Intelligence Algorithms Effectively

    Q&A with Dr. Joe Marks

    Machine learning (ML) and artificial intelligence (AI) are driving innovation and are used by myriad entities in many industries, such as technology, finance, and health care. Analysis Group Principal Almudena Arcelus discussed some of the challenges that these technologies raise and the consequences for intellectual property (IP) rights with affiliate Joe Marks, Executive Director of the Center for Machine Learning and Health at Carnegie Mellon University. Dr. Marks previously was a vice president and research fellow at Disney Research and the research director at Mitsubishi Electric Research Laboratories.

    Q: Based on your recent work and interactions with industry stakeholders, what are the most pressing issues that companies face in using ML and AI technologies?

    Joe Marks: Executive Director, Center for Machine Learning and Health at Carnegie Mellon University

    Dr. Marks: First of all, it’s important to have realistic expectations – AI and ML aren’t magic! They are algorithmic tools that can give us a better way to manage and analyze massive datasets, similar to older methods from statistics and economics. Given that, identifying appropriate and convincing use cases is important.

    Also, one of the most pressing issues is obtaining, structuring, and storing the datasets used by ML algorithms. Another is incorporating human knowledge and judgment into AI/ML-based systems.

    Finally, every software engineer is familiar with the concept of version control, which is basically the management of changes to computer code. There is an analogous concept for ML, except that now it can be even more complex when you have to manage changes to code, algorithm parameters, datasets, and the learned models. If you don’t manage these changes properly, it can be very difficult to know if you’re making progress, and for some applications you may run up costs doing unnecessary or counterproductive experiments.
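    The change-management idea Dr. Marks describes can be illustrated with a minimal sketch (all names and values here are hypothetical, not from any particular tool): fingerprint each experiment by hashing the code version, algorithm parameters, and dataset identifier together, so that redundant or conflicting runs can be detected before they waste time and compute.

```python
import hashlib
import json

def experiment_fingerprint(code_version: str, params: dict, dataset_id: str) -> str:
    """Combine the three changing artifacts -- code, parameters, and data --
    into one reproducible identifier for an ML experiment."""
    payload = json.dumps(
        {"code": code_version, "params": params, "data": dataset_id},
        sort_keys=True,  # canonical key ordering keeps the hash stable
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]

# A simple registry mapping fingerprints to results, so a repeated
# experiment is recognized instead of being re-run.
registry: dict[str, float] = {}

fp = experiment_fingerprint("a1b2c3", {"lr": 0.01, "epochs": 20}, "patients_v3")
if fp not in registry:
    registry[fp] = 0.87  # hypothetical validation accuracy from a training run
```

    Real experiment-tracking systems add much more (lineage of learned models, storage of artifacts), but the core idea is the same: any change to code, parameters, or data yields a new identifier, so progress can be measured against a known baseline.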

    Q: What are some emerging ML and AI techniques and solutions that you are seeing companies implement to create better products and services for consumers?

    Dr. Marks: In the health care domain specifically, it’s interesting to see that the best solutions for a number of problems come from classical AI, such as heuristic search, knowledge representation, and natural language processing (or NLP). Those are different from ML, in that the algorithms don’t learn from data but follow certain rules coded into the program essentially by hand.

    I am particularly intrigued by hybrid human-AI systems that combine human knowledge and judgment with the various AI techniques. For example, health care systems have all kinds of scheduling problems for people, equipment, and facilities. Black-box AI or operations-research systems rarely succeed for these problems: There are just too many real-world, amorphous issues involved for them to work. But if you can combine human knowledge and judgment with the computer’s ability to perform combinatorial optimization – that is, finding an optimal solution among a large but finite set of options – you can build practical systems that health care professionals will use.
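    A toy sketch of this hybrid idea (the clinicians, shifts, and costs below are invented for illustration): the computer exhaustively searches the finite set of shift assignments for the lowest-cost one, while human judgment enters as vetoes on pairings that the cost model cannot capture.

```python
from itertools import permutations

# Hypothetical example: assign three clinicians to three shifts.
# The cost matrix encodes the computer's objective (e.g., overtime cost);
# `vetoes` encodes human knowledge that is hard to formalize.
clinicians = ["Ana", "Ben", "Chen"]
shifts = ["morning", "evening", "night"]
cost = {
    ("Ana", "morning"): 1, ("Ana", "evening"): 3, ("Ana", "night"): 2,
    ("Ben", "morning"): 2, ("Ben", "evening"): 1, ("Ben", "night"): 3,
    ("Chen", "morning"): 3, ("Chen", "evening"): 2, ("Chen", "night"): 1,
}
vetoes = {("Chen", "night")}  # a human scheduler knows Chen cannot take nights

def best_schedule():
    best, best_cost = None, float("inf")
    # Exhaustive search over the finite set of one-to-one assignments.
    for perm in permutations(shifts):
        assignment = list(zip(clinicians, perm))
        if any(pair in vetoes for pair in assignment):
            continue  # respect human judgment before optimizing cost
        total = sum(cost[pair] for pair in assignment)
        if total < best_cost:
            best, best_cost = assignment, total
    return best, best_cost
```

    Here the veto rules out the assignment the pure optimizer would prefer, and the search settles on the best schedule that people will actually accept – which is the point of the hybrid approach.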

    I am also very interested in the intersection of AI/ML and behavioral economics. One can think of AI/ML as an approach to generate insights into data or solutions to problems. But what do you do once you have that insight or solution? Very often the answer involves behavioral economics.

    Q: Speaking of the human element, how can companies identify algorithmic bias in their systems, and what strategies can they apply to alleviate it?

    Dr. Marks: There are two main classes of bias: nonrepresentative training data for ML, and bias in the learned ML model. For example, if you train an ML algorithm with health data from just one hospital, the data may be biased because the patients who go to that hospital skew wealthy or old, or do not have a representative distribution of genetic backgrounds. The resulting ML algorithm will make predictions or detect patterns that are accurate for the biased set of patients in the training set, but that may not be accurate for a national or international patient cohort.

    Bias in the learned model can be more insidious. For example, an ML system for evaluating mortgage applications should not be given training data that includes protected attributes, such as membership in a minority group. However, excluding explicit data about the minority group does not guarantee that the learned model will not disadvantage that group when evaluating loan applications. For example, some unintentional combination of data elements – zip codes, educational attainment, first and last names – might correlate strongly with a certain demographic group and not with another.
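    One way to audit for this proxy effect can be sketched as follows (the records and field names are hypothetical): even after the protected attribute is dropped from the training features, check how strongly each remaining feature predicts group membership. A feature whose values concentrate one group is a candidate proxy.

```python
# Hypothetical audit sketch: the model never sees "group", but zip code
# may still encode it. Measure the share of group-A applicants within
# each value of a candidate proxy feature.
applicants = [
    {"zip": "15201", "group": "A", "approved": True},
    {"zip": "15201", "group": "A", "approved": True},
    {"zip": "15201", "group": "B", "approved": False},
    {"zip": "15210", "group": "B", "approved": False},
    {"zip": "15210", "group": "B", "approved": False},
    {"zip": "15210", "group": "A", "approved": True},
]

def group_rate_by(records, key):
    """For each value of `key`, the fraction of records in group A."""
    counts = {}
    for r in records:
        total, a = counts.get(r[key], (0, 0))
        counts[r[key]] = (total + 1, a + (r["group"] == "A"))
    return {k: a / total for k, (total, a) in counts.items()}

rates = group_rate_by(applicants, "zip")
# Widely differing rates across zip codes signal that zip is a proxy
# for group membership and deserves scrutiny before training.
```

    This is only a first screen, not a fairness guarantee; combinations of features can act as a proxy even when no single feature does, which is why the transparency and peer review Dr. Marks advocates matter.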

    Unfortunately, there are no silver bullets when it comes to eliminating biases from data or algorithms. I put my trust in transparency and peer review. If the data and the algorithms can be examined by others, then there’s a good chance that potential biases will be noticed and corrected over time.

    Q: You have worked in an unusually large number of industrial verticals: defense, computers, industrial and consumer electronics, media and entertainment, health care, marketing, and retail. Are there commonalities across all of these sectors, or is everything new and different in each one?

    Dr. Marks: It’s been gratifying to see how much I have been able to bring with me from one industry to the next, often along with some unique perspectives that aren’t immediately obvious if you haven’t had that broader exposure. I find that categories of applications or technologies cut across multiple companies where I’ve worked. For example, various projects at Mitsubishi Electric, Disney, and CMU all had something to do with cameras looking at people, albeit for different reasons. Other examples might include intelligent user interfaces, and the intersection of AI and behavioral economics, which I’ve already mentioned.

    Q: Not everyone who has worked in technology R&D has an interest in IP protection, but you like working on patent litigation. Why is that?

    Dr. Marks: Three reasons, basically. The first reason is the intellectual challenge. Maybe it’s because I am an expert-level chess player, but I really enjoy sorting out complex patent issues.

    Second, my long history in the industry gives me something of a unique perspective. A lot of the patents currently being litigated concern technology that was developed 15 to 25 years ago. Over that span of time, I was working at three of the great industrial R&D labs in applied computing: DEC Cambridge Research Lab, Mitsubishi Electric Research Labs, and Disney Research. We worked on a huge breadth of applications involving novel technologies such as computer graphics, virtual reality, human-computer interaction, artificial intelligence, machine learning, computer vision, speech recognition, and the internet of things, many of which figure prominently in current patent litigation. So often when I am brought in as an expert witness on a patent case, I know people who worked on the technology, and I have firsthand knowledge of many of the original projects.

    And finally, it motivates me to keep my coding skills fresh, because that kind of hands-on knowledge is particularly useful in technology IP matters. If you’re a computer scientist, actual coding is a rewarding and creative activity. But if you follow a managerial track, as I have, it can be hard to make the time to work on software. So I try to always have one current project where I’m writing code or reviewing code. At the moment I am writing software for an AI-based recommender system involving medical literature, and having a great time with it!