Women in AI: Heidy Khlaaf, Safety Engineering Director at Trail of Bits


To give AI-focused female academics and others their deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. As the AI boom continues, we’ll be publishing several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.

Heidy Khlaaf is the director of engineering at cybersecurity firm Trail of Bits. She specializes in evaluating software and AI implementations within “safety-critical” systems, such as nuclear power plants and autonomous vehicles.

Khlaaf received her Ph.D. in computer science from University College London and her B.S. in computer science and philosophy from Florida State University. She has led safety and security audits, provided consultations and reviews on assurance cases, and contributed to the creation of standards and guidelines for safety- and security-related applications and their development.

Q&A

Briefly, how did you get your start in AI? What attracted you to this field?

I was drawn to robotics at a very young age, and started programming at the age of 15 because I was fascinated by the prospects of using robotics and AI (as they are inextricably linked) to automate workloads where they are needed most. Like in the manufacturing sector, I saw robotics being used to help the elderly and automate dangerous physical labor in our society. I did, however, receive my Ph.D. in a different sub-field of computer science, because I believe that having a strong theoretical foundation in computer science allows you to make educated and scientific decisions about where AI may or may not be appropriate, and where it may cause harm.

What work (in the AI field) are you most proud of?

Using my strong expertise and background in safety engineering and safety-critical systems to provide context and criticism where needed in the emerging field of AI “safety.” Although there have been efforts to adapt and cite well-established safety and security techniques within AI safety, much of the terminology has been misconstrued in its use and meaning. There is a lack of consistent or intentional definitions, which compromises the integrity of the safety techniques the AI community currently uses. I’m especially proud of “Toward Comprehensive Risk Assessment and Assurance of AI-Based Systems” and “A Hazard Analysis Framework for Code Synthesis Large Language Models,” where I debunk false narratives about safety and AI evaluations, and provide concrete steps on bridging the safety gap within AI.

How do you deal with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

Acknowledging how little the status quo has changed is not something we discuss often, but I believe it is important for me and other technical women to understand our position within the industry and hold a realistic view of the changes required. Retention rates and the proportion of women holding leadership positions have remained largely the same since I joined the field, and that was over a decade ago. And as TechCrunch rightly points out, despite the tremendous breakthroughs and contributions of women within AI, we remain sidelined from the conversations that we ourselves have defined. Recognizing this lack of progress helped me understand that building a strong personal community is far more valuable as a source of support than relying on DEI initiatives, which unfortunately have not moved the needle much, given that bias and skepticism toward technical women are still quite pervasive in tech.

What advice would you give to women wanting to enter the AI field?

Not to appeal to authority, and to find a line of work that you truly believe in, even if it contradicts popular narratives. Given the political and economic power AI labs hold at the moment, there is an instinct to take anything AI “thought leaders” say as fact, when it is often the case that many AI claims are marketing platitudes that overstate AI’s capabilities to benefit a bottom line. Yet I see significant hesitancy, especially among junior women in the field, to voice skepticism against claims made by their male peers that cannot be substantiated. Imposter syndrome has a strong hold on women in tech, and leaves many doubting their own scientific integrity. But it is more important than ever to challenge claims that exaggerate the capabilities of AI, especially those that are not falsifiable under the scientific method.

What are some of the most pressing issues facing AI during its development?

Whatever advances we see in AI, it will never be the singular solution to our issues, whether technological or social. Currently there is a trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) across numerous domains. AI should augment human capabilities rather than replace them, and we are witnessing a complete disregard of AI’s pitfalls and failure modes that are leading to real, tangible harm. Just recently, an AI system called ShotSpotter led to an officer firing at a child.

What issues should AI users be aware of?

Just how untruthful AI really is. AI algorithms are notoriously flawed, and high error rates have been observed across applications that require precision, accuracy and safety-criticality. The way AI systems are trained embeds human bias and discrimination within their outputs, which then become “de facto” and automated. And this is because the nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations from historical data, and not any type of logic, factual evidence or “reasoning.”

What’s the best way to create AI responsibly?

To ensure that AI is developed in a way that protects people’s rights and safety through the construction of verifiable claims, and to hold AI developers accountable to them. These claims should also be scoped to regulatory, safety, ethical or technical applications, and must be falsifiable. Otherwise, there is a significant lack of scientific integrity to appropriately evaluate these systems. Independent regulators should also be assessing AI systems against these claims, as is currently required of many products and systems in other industries – for example, those evaluated by the FDA. AI systems should not be exempt from the standard auditing processes that are well established to ensure public and consumer protection.

How can investors better push for responsible AI?

Investors should be engaging with and funding organizations that are seeking to establish and advance auditing practices for AI. Most funding currently goes into AI labs themselves, with the belief that their safety teams are sufficient for the advancement of AI evaluations. However, independent auditors and regulators are key to public trust. Independence allows the public to trust in the accuracy and integrity of assessments, and the integrity of regulatory outcomes.
