Our journey began in 2008 when Nikos, our co-founder and CEO, started developing optimisation solvers for supersonic aircraft, and fell in love with solver design. In 2012, he had the opportunity to work at Imperial College London on designing one of the most challenging types of solvers: a deterministic global optimisation solver for MINLP problems. Gabriel, our co-founder and CTO, was also doing his PhD at the same office, which is how our founders met and became friends.
Nikos completed his PhD in 2016. His research introduced innovative approaches to nonlinear solver design, including massively distributed calculations, internal automation of translating mathematics to code, and a move from monolithic C/Fortran codebases to more flexible C++ architectures without compromising performance.
In 2017, Nikos and Gabriel joined the Entrepreneur First incubator programme in London, leading to the inception of Octeract.
Our initial vision was ambitious: to create the best MINLP solver possible with the technology available at the time. Despite limited funding, recruitment challenges, and immense technical obstacles, we were determined to realise our vision.
Early on, we set several technological goals:
- Enable distribution of calculations across a network of machines.
- Automate low-level technical implementation at high performance, to minimise bugs and maximise prototyping speed. Any conceivable algorithm had to be expressible using very high-level instructions, and the end result had to be comparably fast to a dedicated low-level implementation (a minimal illustration of this idea follows this list).
- Design a modular core to facilitate custom extension without altering the core software.
- Support all types of mathematics.
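To make the second goal concrete, here is a minimal, purely illustrative C++ sketch of one standard technique for it, expression templates, where high-level mathematical notation compiles down to a single fused low-level loop. This is not the Octeract Reformulator or our actual internals; the types and names below are hypothetical.

```cpp
// Illustrative only: expression templates as one way to get high-level
// mathematical notation at low-level speed. Not Octeract code.
#include <cstddef>
#include <iostream>
#include <vector>

// Every expression exposes size() and operator[]; the compiler inlines the tree.
template <typename L, typename R>
struct Add {
    const L& l; const R& r;
    std::size_t size() const { return l.size(); }
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

template <typename L, typename R>
struct Mul {
    const L& l; const R& r;
    std::size_t size() const { return l.size(); }
    double operator[](std::size_t i) const { return l[i] * r[i]; }
};

struct Vec {
    std::vector<double> data;
    std::size_t size() const { return data.size(); }
    double operator[](std::size_t i) const { return data[i]; }

    // Assigning an expression evaluates it element-wise in one pass,
    // with no temporaries for the intermediate sums and products.
    template <typename Expr>
    Vec& operator=(const Expr& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

template <typename L, typename R> Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }
template <typename L, typename R> Mul<L, R> operator*(const L& l, const R& r) { return {l, r}; }

int main() {
    Vec x{{1.0, 2.0, 3.0}}, y{{4.0, 5.0, 6.0}}, z{{7.0, 8.0, 9.0}}, out;
    out = x * y + z;                                   // reads like maths, runs as one loop
    for (double v : out.data) std::cout << v << ' ';   // prints: 11 18 27
}
```

The line `out = x * y + z;` is written at the level of the mathematics, yet after inlining it runs as a single loop with no intermediate vectors, which is the spirit of the goal above.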
Additionally, we aimed to meet the expectations users have of commercial optimisation solvers:
- Ensure stability across a wide range of input problems.
- Maintain numerical robustness.
- Avoid “stupid” bottlenecks, such as poorly scaling algorithms that make a solver get stuck.
- Implement exceptional heuristics.
Crucially, the solver also needed to find good solutions quickly for users to adopt it.
After a year of development, we released the first beta version in August 2019. That version was unbelievably bad. We had compiled a test suite of about 1,600 problems, and three months before release the solver was crashing on about 1,100 of them. After two weeks of fixing roughly 100 crashes a day, our first release crashed on only 20 of the 1,600 problems.
Although not embarrassingly unstable, that first release was unbelievably slow. In fact, it was bad in every sense that mattered to users:
- It crashed.
- It had very bad numerics.
- It got stuck on about 500/1,600 problems.
- It failed to find any solution at all for 400/1,600 of those problems.
Despite these issues, for us that release was a small technological miracle, because it embodied our core design principles:
- It had a scalable implementation of parallel branch-and-bound, which allowed its poorly implemented algorithms to be distributed across small networks of machines.
- It was powered by the Octeract Reformulator, a purpose-specific compiler we built for a language that we designed specifically to write optimisation solvers. This technology allowed us to abstract away a lot of the low-level implementation into a compiler that effectively generated solvers from text.
- The code was reasonably well encapsulated in C++ classes, which served as a functional proof of concept for our novel modular design (a minimal sketch of the idea follows this list).
- The solver supported all types of mathematical functions, even some exotic ones that no other solver supports to this day. With the occasional crash, of course.
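For the curious, here is a minimal, purely illustrative C++ sketch of the modular idea mentioned above: a branch-and-bound core that accepts custom strategies as plugins, without the core itself being modified. It is a toy over a single variable, not Octeract code, and every name in it is hypothetical.

```cpp
// Illustrative only: a pluggable branch-and-bound core. Not Octeract code.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <memory>
#include <queue>

// A node is an interval [lo, hi] over a single variable (toy setting).
struct Node { double lo, hi; };

// Plugin interface: any bounding scheme can be dropped in.
struct BoundingRule {
    virtual ~BoundingRule() = default;
    // Returns a valid lower bound on the objective over the node's domain.
    virtual double lowerBound(const Node& n) const = 0;
};

// Example plugin for f(x) = (x - 1)^2: a crude but valid interval bound
// built from the midpoint value and the interval half-width.
struct MidpointBound : BoundingRule {
    double lowerBound(const Node& n) const override {
        double mid = 0.5 * (n.lo + n.hi), half = 0.5 * (n.hi - n.lo);
        double fmid = (mid - 1.0) * (mid - 1.0);
        return fmid - 2.0 * std::abs(mid - 1.0) * half - half * half;
    }
};

// Core solver: knows nothing about the specific bounding rule it is given.
class BranchAndBound {
public:
    explicit BranchAndBound(std::unique_ptr<BoundingRule> rule) : rule_(std::move(rule)) {}

    double minimize(Node root, double tol = 1e-6) {
        double best = evaluate(0.5 * (root.lo + root.hi));
        std::queue<Node> open;
        open.push(root);
        while (!open.empty()) {
            Node n = open.front(); open.pop();
            if (rule_->lowerBound(n) > best - tol) continue;   // prune
            double mid = 0.5 * (n.lo + n.hi);
            best = std::min(best, evaluate(mid));              // update incumbent
            if (n.hi - n.lo > tol) {                           // branch
                open.push({n.lo, mid});
                open.push({mid, n.hi});
            }
        }
        return best;
    }

private:
    static double evaluate(double x) { return (x - 1.0) * (x - 1.0); }  // toy objective
    std::unique_ptr<BoundingRule> rule_;
};

int main() {
    BranchAndBound solver(std::make_unique<MidpointBound>());
    std::cout << "minimum ~ " << solver.minimize({-10.0, 10.0}) << '\n';  // ~0, at x = 1
}
```

The point is the boundary: new bounding rules, branching strategies, or heuristics can be added as plugins like `MidpointBound`, while the core loop stays untouched. In a real solver the objective, branching rule, and heuristics would be pluggable in the same way.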
All this technology was built in 14 months, starting from an empty text file. As a solver, it was abysmally bad. But it proved that our design worked.
It was around that time that our solver started appearing in the Hans Mittelmann benchmarks. First for the QPLIB quadratic problems, and eventually for MINLP problems.
For this case study we sourced the old results from the available datapoints on the Web Archive and collated them into the chart below. We also took the liberty of pulling the 2021 number from our internal records, since we weren’t on that benchmark yet at the time, just to illustrate how bad our performance was in 2021. Those runs were on different machines and so on, but you get the idea.
Unless you already know how to interpret benchmark results, this won’t mean much to you, but stay with us; we’ll keep this very high level.
The first thing you need to know is that these are benchmarks on problems everyone has known for a very long time. Therefore, the challenge is simple: given 87 problems, what can you make your solver do?
Technological nuances aside, this chart tells us that the problems in this set are diverse and challenging enough that some of the best teams in the world, who have been in this space for much longer than us, have not yet been able to solve all of them.
Let’s now take this year by year.
2017
We began coding in summer 2018, so the first data point reflects the state of the art before our efforts.
2021
By 2021, we had settled the internal design and improved solver stability to enterprise standards. The plugin system worked and parallel scaling was effective, but the core algorithms were lacking. We had only 6-7 heuristics and many bottlenecks and bugs, resulting in poor performance.
2022
Between 2021 and 2022, our work focused on making the core high-performance. All the low-level automation had to be reworked to scale to massive amounts of data. We also added more heuristics into the mix, taking us up to 18. This work was enough to land us first place for the very first time in summer 2022.
This success validated our vision and demonstrated our team’s world-class capabilities. We then decided to see just how far this design could push technological boundaries.
With the core system finally in place and (mostly) working at high performance, between September and December 2022 we rapidly created hundreds of new algorithms, both for R&D and for clients. This was reflected in our solver breaking the 80-problem barrier around December 2022, a result no one had been able to achieve before.
The last few problems proved quite resistant to existing methods, so we decided to push the boundaries one step further: we started using our core for automated algorithmic generation.
2023
In 2023, armed with an AI that could generate and test algorithms autonomously, we could now make our clients very happy. As a by-product of our AI-based improvements, the last few problems melted away within a few weeks. In April 2023, this benchmark was solved to 100% for the very first time. Despite the great benchmark results and how good they looked, our work on real problems told a different story. The really hard problems people come to us to solve still could not be solved off-the-shelf. Meeting our clients’ requirements had always required human effort, and even our innovative use of AI did not change that. What did change was that we could now build much better solutions faster. It was at this point that we decided to focus on bespoke AI-based technology rather than conventional off-the-shelf solver solutions.
2024 and beyond
At the time of writing, no one else has been able to solve this benchmark, reflecting the uniqueness and benefits of our approach. Our AI system is now called Octeract Neural, and it is continuously and autonomously expanded with new heuristics and algorithms. The main benefit is that we can rest assured that technological improvements are in the hands of someone far more capable than ourselves: an AI.
After 5 years of unrelenting technical work, we suddenly had free time. When you’ve been working 12+ hour days for years to achieve a goal, it’s hard to describe how wrong it feels once that goal is done and you now have time to do something else. But there we were, and we just weren’t sure how to use it.
In the meantime, requests kept coming in that went beyond solver work. We used to turn down most such requests, but we now had a lot of free time on our hands, and people could really use our help to tackle meaningful real-world issues. So we decided to take on many more requests than before and help businesses much more directly by building autonomous end-to-end solutions.
At that point, we also realised that it no longer made sense to be part of any benchmarks.
The first reason was that we do not consider it entirely fair to have human developers compete against an AI. Neural solved all known benchmark problems long ago, since they are part of its training set, but what exactly this means remains an open question. What we do know is that humans find it unfair and frustrating to compete against an unbeatable opponent, much like in chess.
The second reason was that as we shifted our focus to AI-powered bespoke autonomous solutions and consulting, we came to view conventional benchmarks as an arena better suited for off-the-shelf solutions. What we offer is very different, so there are no benchmarks that can quite capture the capabilities and nuances of what we can do – any attempt to do so would simply misrepresent the technology and raise fairness complaints. We are pleased to see ongoing human efforts to improve because there is always value in that, and we wish the other teams well, but, as we look into the future, this chapter for us is now closed.
This concludes the story of how our innovative, world-record-breaking Engine came to be, and we hope it provides some insight into the value we add and why. Our software core is merely an enabling tool. The true value is produced by the systems our experts build on top of it, tailored to our clients’ specific needs.