If the word “algorithm” causes you to recoil in horror, you are not alone. Algorithms are sometimes painted as a means to manipulate information on social media. While that perspective may be valid in some instances, it’s myopic. The truth is that the solutions algorithms can offer the world far outweigh the problems.
In her recent webinar, Dr. Aurélie Jean shares how algorithms were used in the battle against Covid-19. During the pandemic, algorithms have been used to:
- detect the virus
- estimate how the pandemic evolves
- predict outcomes
- aid in hospital logistics
- determine the prioritization of vaccine distributions
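To make "estimating how the pandemic evolves" concrete, here is a minimal sketch of the classic SIR (Susceptible–Infected–Recovered) compartmental model, a standard starting point for the forecasting work listed above. Every parameter value below (population size, transmission rate `beta`, recovery rate `gamma`) is an illustrative assumption, not fitted data, and the webinar did not describe any specific model.

```python
# Minimal SIR epidemic model integrated with simple Euler steps.
# All numbers are hypothetical, for illustration only.

def sir_step(s, i, r, beta, gamma, dt):
    """Advance the S, I, R compartments by one Euler step of size dt."""
    n = s + i + r
    new_infections = beta * s * i / n * dt   # contacts between S and I
    new_recoveries = gamma * i * dt          # I leaving the infected pool
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(s0, i0, r0, beta, gamma, days, dt=1.0):
    """Return the infected-count trajectory over the given number of days."""
    s, i, r = s0, i0, r0
    trajectory = [i]
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        trajectory.append(i)
    return trajectory

# Hypothetical scenario: 1,000,000 people, 100 initial cases,
# beta = 0.3/day, gamma = 0.1/day (basic reproduction number R0 = 3).
curve = simulate(s0=999_900, i0=100, r0=0, beta=0.3, gamma=0.1, days=120)
peak_day = curve.index(max(curve))  # day on which infections peak
```

Even a toy model like this illustrates the pattern behind the real forecasting tools: infections rise, peak, and decline, and the timing of that peak is exactly the kind of output hospital-logistics and distribution planners rely on.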
Most notably, algorithms were responsible for the accelerated development of multiple vaccines across a highly collaborative global scientific community.
While we hope there’s not another pandemic in the future, Jean says if the next virus is similar to Covid-19, then everything scientists have collected and developed over the past year can be used again. And even if it’s not exactly like Covid-19, we have still learned valuable lessons about what data to collect, how to prepare it, and how best to collaborate when training models on that data.
Jean also stresses that it’s important to continue to use critical thinking and not just blindly trust an algorithm. Algorithms simply provide you with a suggestion – you can reject it if you have insight into biases that might invalidate the results.
Learn more from Jean about how algorithms work and what their limitations are here.

Q&A
Jean covered a lot of ground during her webinar, but there are always participant questions we aren’t able to address live. We collected them and, in a follow-up interview, curated her answers below.
How seriously should policymakers take algorithmic predictions with regard to the pandemic?
They do need to take these predictions seriously — but whether or not they have the critical sense to really use those predictions the right way is a whole other topic. The lack of scientific knowledge leaders have in general is an issue — so the governance of a country should be aligned with scientists. Scientists need to be a part of the administration.
Some have brought up implementing an “algorithmic code” similar to a doctor’s Hippocratic oath – is this one possible way to manage ethics and biases with regards to algorithms?
So, this actually already exists in some companies, where there are guidelines and rules to follow in terms of algorithmic development, testing, back-testing, etc. So now it’s a matter of how we can make those existing guidelines more transparent so that other companies can be inspired by them and implement them as well – and then have that roll out at the state level as guidelines or mandatory regulations to follow.
These are “unprecedented times” — the pandemic, the pace of vaccine development, the general population becoming more aware of algorithms (even if only through their more familiar role in social media). What do you hope all this change means for the future? What lessons should we carry with us to continue to enhance, develop, and hopefully build a little more trust – especially around concepts people aren’t really aware of, where distrust is bound to arise?
I’m a big believer that if you want to fight against what individuals consider a threat, you have to make sure that people understand how things work. Why? Because once you understand how something works, you can now understand its impact on you and on your actions. And I think, in some way thanks to this crisis, we now understand (and are still in the process of understanding) that algorithms can be used in many ways, for many applications, and for positive reasons. And we also begin to understand their limitations and opportunities as well.
Hopefully, the crisis will be an opportunity for everyone to understand how algorithms work – and be in the position to be a “power user” of the technology and not just a “simple user.”
Using the technology versus it using us.
Yes, exactly.
We talked about the relationship between data points and the human connection. There’s only so much an algorithm can do before humans step in to manage things – can you expand on the limitations on both sides (human and algorithms) and ways to counteract those weaknesses to bring out the strongest relationship possible between the two?
Yes, so as we mentioned, we are human beings – so we have biases – which is normal! Trying to erase those biases is useless because those biases exist and will always exist. And not all biases are bad, so we should not try to remove them. These biases can help us understand others and how society learns.
Humans have biases and we also tend to make mistakes. Machines do not really make mistakes. There might be a bug, but it’s not really a mistake in the way that we as humans make a mistake – because we’re tired or forgot to check something off the process list.
Machines and algorithms in general do what we ask them to do. So obviously machines and algorithms can’t run independently without humans. For example, let’s say you train an algorithm to play a game – then you ask that algorithm to play another game. Most likely the algorithm is not capable of playing the new game, because it doesn’t have the situational awareness or intelligence that humans use to adapt.
So, in other words, we really need to keep the human in the loop through this close collaboration between the algorithm and the human. Not only because this determines how the algorithm should evolve, but also because including a diverse set of humans helps us understand how the algorithm is learning, which provides useful feedback to improve the algorithm itself. All of this can help detect biases and bugs.
One could say “Oh, let’s just not use an algorithm and do everything by ourselves.” In reality, if we were doing that, we wouldn’t be as fast in the development of the vaccine and so many other things. So, I think we need to be humble and say “Ok, we’ve been using machines for a long time and been assisted by machines for a long time – it’s a great thing.” For example, no one wants to go back to horses instead of cars.
It’s a matter of making sure everyone is involved in this evolution of society and that we are developing good technologies for the right purpose at the right time. And just because an algorithm can do something doesn’t mean we have to use it. Sometimes it’s important to have a human ultimately decide on a course of action because it involves emotional qualities, like happiness or satisfaction, that a machine just isn’t capable of reproducing or processing.
Can you speak to challenges you think governments are facing? For example, here in the US we seem to really be behind with regard to vaccine logistics and distribution. What’s the ideal relationship between government and the scientific community to take full advantage of all that algorithms can do? What role should government have in the process? Any best practices already out there?
The real problem is that most leaders — except maybe Angela Merkel — do not have science or engineering backgrounds. So, it’s a big issue because we absolutely need state leaders who, if that’s not their background – are at least surrounded by and interact often with credible scientists and not only when a crisis occurs. (Because if we wait until a crisis happens to establish these relationships, it’s usually too late).
There’s a big hope with the current administration in the US because they’ve decided to have a Science Office, with a former MIT professor as a main advisor, and that’s a big deal!
So, it’s important for state leaders to really appreciate science and also put money into research related to those areas. And if they’re surrounded by people that are knowledgeable, they can at least begin to understand the underlying mechanism of the science … understand what it can do for us and where to invest.
What was your most exciting finding regarding the use of algorithms for Covid-19?
The vaccine development acceleration. I don’t know anything about vaccines – it’s not my field, I’m not a medical doctor. I was like “Wow! What they did was really impressive” and yet people don’t know about it! So we need to communicate that more to show people what we can do with algorithms and how they can really help in a crisis.
When we hear about algorithms, people tend to see the negative or the glass half empty, which I think is really sad. It’s not just something used to manipulate social media. We can do so much more with it and be so much better because of it.
Recommended Resources from Dr. Jean:
- Forecasting the long-term trend of COVID-19 epidemic using a dynamic model
- The challenges of modeling and forecasting the spread of COVID-19
- Machine learning models for COVID-19 future forecasting
- Examining Deep Learning Models with Multiple Data Sources for COVID-19 Forecasting
- Rapid triage for COVID-19 using routine clinical data for patients attending hospital: development and prospective validation of an artificial intelligence screening test
- Artificial Intelligence for COVID-19 Drug Discovery and Vaccine Development
- Artificial intelligence in supply chain management: A systematic literature review