AI can be used to target enemies or to save lives. In this conversation, John Jaeger describes his journey from Chicago’s trading desks to warning civilians of incoming airstrikes in Syria’s civil war. With Hala Systems, he has set a new technological standard for documenting war crimes and monitoring conflict zones and borders.
Founder, Hala Systems
John Jaeger founded Hala Systems in 2015 with David Levin. Previously, John served as Director of Technology for financial firm PEAK6 Investments, Director of Product for OptionsHouse, and worked for the U.S. State Department.
Hala Systems is a for-profit social enterprise active in the domains of public safety and law enforcement, as well as national security and defense. It leverages AI, open-source intelligence, and proprietary data to monitor and analyze events. The company operates in multiple locations, including some of the most difficult places on earth. It has been primarily funded via non-dilutive grants and received early investment from multiple backers, notably Mark Cuban.
You had executive and technology roles in the past. You worked for an equity options trading company based in Chicago, and later joined the U.S. State Department. What motivated this step?
That was a massive pivot for my career, and a very challenging one for me. I had found that technology problems only scale upward in complexity and size, not necessarily in breadth and variety. Making the shift away from being an engineer, working problems from a technology perspective, opened up my brain to solving problems more focused on humans and human interaction.
The movement into the State Department was unexpected, most of all by me. The story behind that was that I had my mind open to working just about anywhere, as I was recently married and we had decided, as a family, that we wanted to live abroad. I had never lived outside of the US. And so I had to cast a fairly wide net through my network to say, ‘Hey, does anybody know anything I could do?’
I was introduced through personal connections to a part of the State Department working on fascinating problems and taking a fairly innovative mindset in solving those problems. And they took a chance on me, I took a chance on them, and we did extraordinary things together. When I joined the State Department, by seeing their work up close, I gained an appreciation for the type of person who goes into this service. It doesn’t pay as well as the private sector, the hours are terrible, and the exposure to harrowing, difficult things is endless. But people want to improve the world by working through their government. It also brought me into community with an exceptional group of human beings, Syrians in this case, who were trying to make their country a better place amid a horrific war. And they didn’t want to pick up guns to do it. And it was really through working with them and trying to find different ways to solve those problems and create safety for people who didn’t opt into that conflict that I got the idea to start this company, Hala.
When you worked for the State Department in Turkey for three years, you got a glimpse into the horrors of war. What work did you do there that led to this idea of Hala?
My work for the State Department primarily involved collaborating with peaceful activists, mainly in the realm of citizen journalism. We identified that there were a lot of young people whose educations or early careers had been interrupted by this horrific war. They wanted to share with the rest of the world the story of what was happening in their country. A lot of international news organizations were not able to operate in Syria after the outbreak of the war because it was an incredibly dangerous place. That left just state news, which was an apparatus of the people in charge. And so they wanted to get a different story and perspective out.
It was through my work with them and through the stories that they told that I got a front row seat, albeit from the safety of an adjacent country, to the incredible and awful things that people were doing to each other in Syria. Chief among those was the use of indiscriminate violence. You had incredible amounts of bombs dropped on civilian communities and infrastructure. I just thought that really can’t just be allowed to happen. Someone should do something about that. There was talk of establishing a no-fly zone, but it didn’t materialize. And I thought, OK, well, what if we could warn people using intelligence sources and methods before bombs dropped on them? It doesn’t stop anybody from dropping a bomb, but it gives people some agency.
We had a front row seat to the increased availability of open-source intelligence. The Syrian civil war was the most well-documented conflict in human history up to that point, and the war in Ukraine has now surpassed it. Even at that time, citizen journalists, regular people with social media accounts, were documenting extraordinary events and posting them online, where anyone could access them. We had access to incredible amounts of information.
And so, I thought we could utilize this massive increase in intelligence, both open-source intelligence, which includes people posting on YouTube, Twitter, etc., and commercially available intelligence, such as satellite imagery that has increased in availability and quality, and decreased in price as more people have utilized it. These developments were later joined by the revolution in computing and availability of foundational models.
We are all dealing with this now, as all these competing foundational models, with tens of billions of dollars of research poured into them, are being given away. So you combine this availability of information with unimaginable amounts of computing power, and you can do some really neat things. The genesis of Hala was, ‘Hey, can we utilize intelligence that’s already out there, process it using AI or machine learning techniques, and use that to generate something of value for people to help them make better decisions or take action?’ And specifically in the case of Syria, we could deliver this information to people and help them save their own lives in the case of indiscriminate violence.
Often, people just observe things and do nothing. There needs to be a moment of inspiration and, obviously, a team or a co-founder to get started. How did you meet your co-founder, and when did you decide to give it a go?
My co-founder, Dave Levin, was just as annoyed and frustrated by the traditional approaches to solving these problems as I was. He came from a slightly different background, more educated and experienced in the consulting world, and had accomplished some amazing things. We were introduced by mutual friends and hit it off immediately. He moved to Istanbul without much hesitation, and we teamed up with our third co-founder, an insanely talented Syrian engineer named Ayman Al-Malki, to tackle this problem. And we did it because no one else was going to, and it was one of those moments when you think to yourself, ‘When I’m 90 years old, what will I regret?’ One of the regrets I identified was not doing this when, truly, no one else was naive enough to try. Very few people believed it was going to be possible. But some investors, some very early supporters, took a chance on us, as did the US government at the time. They provided the initial funding, albeit modest, for us to implement this pilot. And it worked, to everybody’s surprise, probably better than I imagined it could.
What different problems does Hala solve?
It’s not viable as a business to simply serve as an early warning system technology provider for airstrikes launched by a country’s government against its population. Thankfully, that’s a very niche problem. And I hope it remains so. But being able to address challenges anywhere that access to high-confidence, timely intelligence insights makes a difference is well within our remit.
We have committed ourselves to several missions that fall under two primary categories. One of those categories is public safety and law enforcement, broadly speaking, and the other is national security and defense. Now, those are pretty pointy words, and a lot of people have very emotional reactions to them, but they really do encompass a huge amount of potentially positive impact for people.
Under the public safety and law enforcement label, we conduct a significant amount of work related to the prosecution of war crimes. Effectively prosecuting war crimes, atrocities, and other international crimes connected to conflict requires huge amounts of investigation and documentation. This could extend into more domestic law enforcement work, but we have not yet tackled that. The documentation of evidence and the investigation of crimes, especially horrific crimes that are fairly large in scale and leave traces, is a particularly good problem set to tackle with intelligence analysis, remote sensing, and some of the other methods we use.
On the national security and defense side, the topic is early warning: the ability to inform people that an attack is coming before it arrives. That means choosing a space, which could be a border, a military base, or shipping lanes in the ocean, and watching it as closely as possible for any signs of something interesting happening that could indicate to a human that they might want to examine it more closely.
I’m simplifying it somewhat, but it’s essentially about scanning a large portion of the planet using various sensors over a specific spatiotemporal period. That could be a satellite. It could include things said in the open-source realm. It could be aircraft transponder data. It could be all sorts of different things. We’ve applied our skill sets to everything from monitoring ceasefires to trying to return abducted children. It is a wide variety of problem sets for which remote sensing and intelligence analysis are good tools.
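As a rough illustration of that kind of cross-source corroboration, consider a minimal sketch in Python. Everything here, the names, thresholds, and the corroboration rule itself, is a hypothetical illustration of the general approach, not Hala’s actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    source: str   # e.g. "satellite", "osint", "transponder" (hypothetical labels)
    lat: float    # degrees
    lon: float    # degrees
    t: float      # unix timestamp, seconds

def flag_for_review(observations, radius_deg=0.1, window_s=3600, min_sources=2):
    """Flag observations corroborated by at least one independent source
    within roughly 10 km and one hour; cross-source agreement is the cue
    that a human analyst might want to look more closely."""
    flagged = []
    for a in observations:
        sources = {a.source}
        for b in observations:
            if (b is not a
                    and abs(a.lat - b.lat) <= radius_deg
                    and abs(a.lon - b.lon) <= radius_deg
                    and abs(a.t - b.t) <= window_s):
                sources.add(b.source)
        if len(sources) >= min_sources:
            flagged.append(a)
    return flagged
```

The design intuition is that no single sensor is trustworthy on its own; agreement between independent sources is what earns a human analyst’s attention.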
Let’s take a concrete example of what you did in Syria. You alerted the populace about incoming airstrikes and saved lives as a result. Your approach involved both hardware and human plane spotters. How did this all work together to save lives?
We’re agnostic about which data to use. We aim to collect as much relevant data as possible, enabling us to process it and apply analysis to derive actionable insights.
In the case of Syria and the early warning system for airstrikes, the best source of information that we thought we could analyze and make a prediction from was actual human spotting of those aircraft, either in flight or in a takeoff-ready posture. We found that many people were already doing that job. There was a very ad hoc network of spotters who had decided to take on this task and were reporting into WhatsApp groups and Telegram groups in the early part of the war. They were saying things like, “In this village, we think that there might be something coming.” But it was very noisy and they weren’t really working together. The first thing we did was just get them all to work together with us, reaching out to them and developing relationships. It was old-fashioned diplomacy, suggesting that we can accomplish a lot more if we all work together. The people who did that job were doing so at risk to themselves, and their only goal was to warn people before someone dropped a bomb on them.
The next thing we wanted to do was leverage their incredibly practiced skill of identifying airframes and transform the unstructured reporting they normally did into a structured format, so we could utilize it with technology. We created a very simple Android application that they used to essentially convert their normal tasks into structured reporting. Very simple reporting, too: this type of aircraft, fixed-wing or helicopter, at roughly this location, appears to be heading in this direction. That’s it. That’s the data we captured.
And with one of those reported sightings, you can create a cone of possible attack. If you know how fast airplanes go, how much fuel they’ve got, where they generally fly, and where they’ve attacked before, you can illustrate a cone of potential targets. The more sightings of that aircraft you get, the better. At one point, we had pretty awesome coverage of the air above Syria, so we would sometimes have a few sightings of the same aircraft and could disambiguate them to some extent. The more reported sightings of that aircraft, the narrower that cone becomes. We had different thresholds for different levels of warning. As soon as we reached a certain confidence level, either because we had multiple sightings or because we actually received reports of an aircraft conducting pre-attack reconnaissance above a target location, we would trigger the creation of a warning.
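To make the geometry concrete, here is a minimal Python sketch of how such a cone of potential targets could be drawn from a single structured sighting. The airframe speed, cone angle, and flat-earth math are all hypothetical simplifications, not Hala’s actual model:

```python
import math
from dataclasses import dataclass

@dataclass
class Sighting:
    lat: float          # estimated aircraft latitude, degrees
    lon: float          # estimated aircraft longitude, degrees
    heading_deg: float  # rough direction of travel, degrees from north
    speed_kmh: float    # typical cruise speed for the reported airframe

def threat_cone(s: Sighting, minutes_ahead=10.0, half_angle_deg=30.0, steps=20):
    """Return (lat, lon) vertices of a wedge covering where the aircraft
    could plausibly be within `minutes_ahead`, fanning out around its
    reported heading. Crude flat-earth math, fine for a sketch."""
    reach_km = s.speed_kmh * minutes_ahead / 60.0
    km_per_deg = 111.0  # ~111 km per degree of latitude
    vertices = [(s.lat, s.lon)]
    for i in range(steps + 1):
        bearing = math.radians(s.heading_deg - half_angle_deg
                               + 2.0 * half_angle_deg * i / steps)
        d_lat = reach_km * math.cos(bearing) / km_per_deg
        d_lon = (reach_km * math.sin(bearing)
                 / (km_per_deg * math.cos(math.radians(s.lat))))
        vertices.append((s.lat + d_lat, s.lon + d_lon))
    return vertices

# Each additional sighting of the same aircraft yields another wedge;
# intersecting the wedges narrows the set of plausible targets.
```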
The warning was initially sent out via Facebook Messenger, Twitter, and other available channels. Again, we were agnostic about how to deliver the information; we wanted to get it into people’s hands wherever they were already getting information, so they could use it. Ultimately, what we realized was that the best way to get this information out is the old-fashioned way: air raid sirens or warning lights. We created hardware, in some cases fairly simple. One device was a relay or actuator that simply triggered an air raid siren when our system passed it a specific parameter.
We also deployed warning light systems in hospitals. There are numerous hospitals and medical centers within the non-regime-held parts of Syria. They have developed their own health infrastructure; they were simply trying to serve the needs of millions of people who had essentially been on the wrong side of the line, as dictated by the state’s political leaders. We developed a warning system utilizing lights in collaboration with medical organizations and hospitals, as sirens are not permitted during surgeries. One of the warnings was that there is an imminent threat to your location, and another that there is a general threat to the area, so people need to be on a slightly higher level of alert. And the last warning was that there has been a multiple casualty event in the area, to prepare for incoming triage.
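As a toy illustration of how such tiered warnings and simple trigger hardware could fit together, here is a hedged sketch; the level names and the device interface are hypothetical, not Hala’s firmware or API:

```python
from enum import Enum

class AlertLevel(Enum):
    IMMINENT_LOCAL = "imminent threat to this location"
    AREA_ALERT = "heightened threat to the surrounding area"
    MASS_CASUALTY = "multiple-casualty event nearby; prepare for triage"

class SirenRelay:
    """Hypothetical stand-in for the relay/actuator hardware: it closes
    a circuit (here, it just prints) when an alert level is dispatched."""
    def signal(self, level: AlertLevel) -> None:
        print(f"SIREN ON: {level.value}")

def dispatch(level: AlertLevel, devices) -> None:
    # Fan the alert out to every registered end device (siren relay,
    # hospital warning light, school buzzer); each renders it its own way.
    for device in devices:
        device.signal(level)

dispatch(AlertLevel.AREA_ALERT, [SirenRelay()])
```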
We deployed these warning lights in hundreds of locations, and they too were triggered by the warning criteria I described. For schools, we created buzzers with slightly less terrifying noises. We turned the system on around midnight on August 2, 2016. It ran until mid-to-late December 2024, when the war concluded.
A follow-on question: People were probably happy to have at least some early warning. But with the sirens and all, this had an official character. What was scarier: false positives or false negatives?
We asked ourselves that question quite a bit in the several years that we operated the system. There are a couple of things to pick through there: the concept of false negatives, and that of false positives. Neither is as obvious to define as we originally thought. And we got asked this question a lot: how many false positives did you send out? Well, define a false positive. If I tell you in your village that there is an aircraft approaching, carrying bombs it might drop on you, but it flies over your house and goes to the next village, is that a false positive?
The people in those affected communities said no: we want to know when there is a threat. I would say there were a few, though surely not zero, instances where we sent out a warning that had no connection to a plane flying around with bombs on it. But there were a lot of planes flying around with bombs on them. We sent out numerous alerts.
At one point, there was a pretty sustained campaign against the eastern part of Damascus by the Syrian regime. It was brutal. We had alerts multiple times an hour. And we reached out via our contacts and asked whether we should shut the incessant alerts off; it must have been exhausting people. What came back was a resounding no. And we had never thought about it this way: the alerts were accurate for what people were using them for. As long as the sirens weren’t going off, people would move, check on their family, go get food, go get water, and relocate themselves. The presence of the siren was jarring, it was exhausting, but they were using the times when it wasn’t going off to move with some degree of security and safety. So they said: please do not turn down the sensitivity on this. We need to know when we’re under threat, even if that threat is seemingly incessant. It was a real surprise, frankly.
I didn’t think that was one of the ways it would be useful, but it turned out to be. People were willing to endure a heightened sense of fear if it meant they would know when they would no longer need to be afraid. If you don’t have that information, then you lack the agency to ever not be afraid. You remain afraid 24/7 without something that you can count on with some degree of confidence. And we were not perfect. We didn’t see everything. We didn’t hear everything. We didn’t warn everyone effectively. But it was way better than nothing. That’s the decision you have to make in a situation where there is no optimal solution. Optimal would mean not to drop bombs on your own citizens. That’s the best-case scenario. We were very far away from that. We believe we saved many lives, and our estimate of the system’s effectiveness was very conservative.
Let’s turn to the other topic that you mentioned, the documenting of war crimes. Some members of the Syrian regime have received sentences. In June, a court in Frankfurt sentenced a doctor, a brutal torturer and murderer, to life imprisonment. What kind of war crimes can Hala document, and how do you do that?
I will caveat all this by saying I’m not a lawyer. We didn’t set out to document war crimes, or to work in accountability or law enforcement or any of that. But we had to watch what was happening in the country very closely, and the way we did it had to be fairly technologically sound: good data integrity, good storage techniques, good data hygiene. We needed clean, good data, and lots of it, to predict where harm was going to happen, and afterwards we needed it to review how well we had done, so that we could iterate and improve the system. So we ended up observing very closely everything that was happening, at least from an air war perspective. That led us, naturally, to helping people who were doing that kind of work, specifically the investigatory work into atrocities. And there were many committed, not just by the Syrian regime, either.
There was a specific targeting of civilian infrastructure and healthcare infrastructure, things that were on lists that said, ‘Don’t blow this up.’ They blew it up intentionally. They targeted gatherings of civilians. They did really nasty, movie-villain-type stuff because it was effective. It was the regime’s approach because it didn’t have an endless amount of resources. It was a way to try to reduce the resolve of the people who were resisting them.
We were really careful about saying “evidence” versus “potential evidence” versus “information”, because we are not adjudicators. We are not activists. We are not advocates. We are just observers. However, it was fairly uncontroversial to say that some really bad things were being done systemically and with malice aforethought. We were also able to observe some of that premeditation. We were able to observe those actions. We were able to document command and control actions, who ordered whom to do what. We accomplished this primarily through the monitoring of unencrypted radio communications.
Some of the spotters we were working with were using radio frequency scanners to listen to conversations between pilots and air traffic controllers, as they were unencrypted. We realized that this was very interesting, but that nobody that we knew of was actually capturing all of this. We deployed some listening capabilities and then started collecting. We captured something like 80,000 communications, and in some cases, they’re very explicit about what they are about to do.
We contributed to a New York Times article that won a Pulitzer Prize. In that particular situation, Russian bombers were targeting health care infrastructure that was on the UN no-strike list. And they were pretty brazen and unafraid about doing so. Nobody had a response to fixed-wing bombers in Syria, at least not in the Syrian opposition or non-regime-held areas. So they weren’t afraid of talking, because no one was going to shoot at them, at least at that stage of the conflict.
We were the ones who were getting this. We realized that we needed to become much smarter and better at capturing this sort of data, analyzing it, and deriving insights from it. We began working extensively on audio processing and achieved some impressive results using AI, starting from the simple, the transcription and translation of audio in certain languages, all the way to entity extraction, reasoning, and more advanced intelligence processing of those conversations. And those conversations are pretty loud evidence of potential war crimes.
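A pipeline of the kind he describes, transcription and translation followed by entity extraction, can be approximated today with open-source tools. Here is a minimal sketch using the openai-whisper and spaCy packages; it illustrates the general approach under those assumptions, not Hala’s actual stack:

```python
# Requires: pip install openai-whisper spacy
# and:      python -m spacy download en_core_web_sm
import whisper
import spacy

asr = whisper.load_model("small")     # open-source speech-to-text model
ner = spacy.load("en_core_web_sm")    # English named-entity recognizer

def process_intercept(audio_path: str) -> dict:
    """Transcribe a radio intercept and translate it to English, then
    extract named entities (places, organizations, dates) as a first
    step toward more advanced reasoning over the conversation."""
    result = asr.transcribe(audio_path, task="translate")  # translate -> English
    doc = ner(result["text"])
    return {
        "transcript": result["text"],
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
    }
```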
Again, I am not a lawyer. I don’t get to decide what is a war crime and what is not. However, as the ones responsible for observing, we do get to decide how to take care of that information, how to preserve its integrity, and how to ensure it doesn’t get deleted or lost, while also making it available to those working on issues like transitional justice in Syria. You have a country that is essentially starting from scratch again. Part of any successful post-conflict order is the implementation of transitional justice and reconciliation, as well as holding people accountable for the crimes they committed, even if forgiveness is later extended.
You mentioned the work that Germany is doing with what they call universal jurisdiction. This is innovative and amazing. There’s not a huge amount of innovation in the international accountability space. It’s a fairly slow-moving, technophobic kind of field. And I don’t mean that in an insulting way. It’s just that the law moves and changes slowly, probably for the best. But Germany has actually been incredibly innovative in saying, “we are going to hold people accountable even if they’re not here, even if they’re not in the country that they committed that crime in”. I think it’s extraordinary work. Given the kind of world in which we live, where there are spoilers to any truly international approach to justice and accountability, innovation is necessary because you cannot simply lock things down so that no one’s ever held accountable for committing terrible acts.
One reason I really looked forward to this conversation is the somewhat dystopian image one can get of the role of technology in conflict. AI can do bad things; we may soon have autonomous killing systems. Technology can make conflict much worse. However, as we have heard, it can also be used to keep score and hold people accountable for their actions. Do you think that the utopia of AI or other technologies being a force for peace is still possible?
Probably not. It’s hard to imagine utopias in the world we live in. I think we give AI way too much credit, both in its positive and negative aspects, to be honest. I think it’s an extraordinary technical development. But it’s still just a tool. There was a huge uproar about Israel using AI to do target selection. The fact is, we have been utilizing technology to facilitate more effective targeting for as long as we’ve been doing targeting. I think that getting angry about the technology and its application is missing the point a little bit. We need to be angry at the people who are employing technology in ways that we don’t think are right for humanity. It’s not the use of the software that makes the indiscriminate bombing of civilians wrong. Software is software. I don’t want autonomous killing, for sure. Everything that distances a person from the person they’re doing harm to makes killing easier. That is bad for us as a human race.
However, I also don’t believe in ceding or surrendering the use of technology solely to those who are using it for weaponry. That is also what I hear from the humanitarian side. Technological tools can and will be used for harm. There isn’t much we can do about that because they’re really good tools. If we don’t start using those tools to build a more inclusive, healthier, kinder society, we’re just helping the people who are using them to do harm. I think we have to contest that space. We must utilize all these technology tools to our advantage to accomplish the things that we believe make the world a better place.
I’m not anti-defense. I believe that there are certainly scenarios where the use of force is the most effective course of action. But there are lots of alternatives to dropping bombs or using drones. And I don’t think we explore those nearly as exhaustively as we explore the use of these technologies for more violent means. I think that’s a mistake. And again, I don’t blame the defense industry for that. I blame the peace industry for that. It’s incumbent upon those who make peace to seize the opportunities to utilize these tools and put resources to work, thereby innovating in that space. It’s not up to the Department of Defense. That’s not their job. They’re doing their job. Do yours.
Let’s discuss the peace industry, then. What kind of investors invest in PeaceTech?
Not many, and we don’t even use that word. There have been attempts to label technologies as peace tech or humanitarian tech. I think those became radioactive. It doesn’t work to divide defense technology into camps that are for peace and for war. Different companies have different values, and they apply those values in a way that makes sense to them and their board or their shareholders. Our investors are awesome and basically believed in our double bottom line. It’s a false choice between doing good and doing well, and we don’t want to make that choice. We can work with defense and intelligence and still maintain our values. We are not exactly beating away Sand Hill Road venture capitalists with a stick; we cannot guarantee a 10,000% return on our work. But we run a successful business, we generate a reasonable amount of revenue, and we are improving our margins by having real, commercially viable products and capabilities, and a go-to-market strategy.
I do believe in running for-profit social enterprises. More people should do it. It’s incumbent on us to demonstrate how the good we’re doing is actually profitable. People working in this space need to improve at quantifying impact, as investors require a clear understanding of its value. If we can’t explain how we’re making the world a better place or what sort of positive impact we’re having, then no one should give us money. That’s our job. There’s still a long way to go in this space before investors feel informed and safe enough to invest heavily in pure peace tech or humanitarian tech. There aren’t many companies that are excelling in that space.
Let’s conclude with a look at the foreseeable future. As you said, issuing early warnings for armed conflict is too narrow a niche. You mentioned that with your capabilities, you can monitor a variety of spaces or borders. There are numerous geopolitical choke points, such as the Strait of Hormuz, that are worth monitoring. How do you see Hala evolving in the next few years?
With the analytical capabilities we’ve brought to maturity, we will focus on those who generate that intelligence, such as satellite imagery providers or intelligence, surveillance, and reconnaissance platforms: those who have a lot of data, but not necessarily the ability to process it into meaningful insights. I think we’ve got a lot to offer those folks. The applications within national security and defense, search and rescue, disaster response, and public safety and law enforcement are myriad. We’re going to explore how we can apply our analytical and processing capabilities to those problems by partnering with organizations that collect imagery, for instance, or audio.
The last six months have been pretty wild. There is a significant shift in Europe’s attitude towards pragmatic action in defense of the continent, and I think that yields a lot of opportunities. American venture capital is pouring into European defense tech companies because everyone has seen the recommitments to NATO obligations, and that is going to create a huge amount of opportunity for those who are there to help build Europe’s capacity to defend itself.
One of the significant limitations, not just for Europe but for anyone conducting intelligence or defense work effectively, is human resources. There are simply not enough adequately trained people who can be hired to do all this work. The only way that Europe, and frankly the rest of the world, is going to increase its capacity to handle, process, and derive value from intelligence is through technical means, because we simply can’t make an infinite number of human intelligence analysts. Therefore, we must rely on technology and innovative approaches to solve these problems in order to have any chance of meeting our obligations and aspirations. Our organization has a significant amount of potential value, and we’re exploring ways to harness that value while remaining true to our core values. We’ve got a 10-year history of trying to be on the right side of everything.