How to Build Trust When Building AI🤝?

Angelina Yang
5 min read · Aug 4, 2022

A few posts ago, we introduced how to test NLP models from the lens of software engineering, and shared some examples of results from such tests against several well-known large language models.

Testing is just one of the guardrails for building trustworthy AI applications.
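To make this concrete, here is a minimal sketch of the kind of software-engineering-style test mentioned above: an invariance test, where a perturbation that should not change the model's output (here, swapping a person's name) is applied and the prediction is checked for stability. The `predict` function is a toy stand-in for a real NLP model, not any specific library's API.

```python
def predict(text: str) -> str:
    """Toy sentiment model: a rule-based stand-in for a real NLP model."""
    return "negative" if "terrible" in text.lower() else "positive"

def invariance_test(original: str, perturbed: str) -> bool:
    """A label-preserving perturbation should leave the prediction unchanged."""
    return predict(original) == predict(perturbed)

# Each pair differs only in a detail that should not affect sentiment.
cases = [
    ("Mary had a terrible flight.", "John had a terrible flight."),
    ("The service was great.", "The service was great!"),
]
results = [invariance_test(a, b) for a, b in cases]
print(results)  # True means the model passed that invariance check
```

With a real model, a failing case in `results` is a concrete, reportable bug, which is exactly the kind of evidence that helps users trust (or appropriately distrust) a system.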

As machine learning is making more and more decisions for us, and ML systems are becoming more and more complex, it’s important to step back and ask the question, how can we trust machine learning?

Let’s step back one more step: How can we trust?

Dr. Carlos Guestrin of Stanford cited three areas of trust in a recent talk:

  • Technical competency — Does my service provider know what they are doing?
  • Interpersonal competency — Do they communicate well with me what they are doing?
  • Agency (loyalty) — Do they have my best interest in mind?

Core trust principles in machine learning

As ML developers, we first need to understand and trust what we are building; only then can we demonstrate to our users why they can trust what we built. There are three core principles for building trust in machine learning:

  1. Clarity
  2. Competence
  3. Alignment

Clarity is about understanding and communicating well what’s been built. We would like to understand why a machine…