Two recent releases with a lot of QP-related news.
<dependency>
<groupId>org.ojalgo</groupId>
<artifactId>ojalgo</artifactId>
<version>56.2.0</version>
</dependency>
<dependency>
<groupId>org.ojalgo</groupId>
<artifactId>ojalgo-clarabel4j</artifactId>
<version>0.1.0</version>
</dependency>
What’s new?
ojAlgo-clarabel4j is a new 3rd-party solver integration, integrating Clarabel via clarabel4j. To be honest, the integration is only half done. Plain QP works great, but setting up the various cone types needed for more complex constraints is not yet complete. There is partial support for Second-Order Cone Programs (SOCP), but nothing for Exponential Cone Programs, Power Cone Programs or Semidefinite Programs (SDP). When integrating with ExpressionsBasedModel, SOCP will be the primary cone type to support. Clarabel4j calls Clarabel using Java’s new Foreign Function and Memory (FFM) API, and therefore requires Java 25.
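For context, Clarabel solves problems in the conic form minimize ½xᵀPx + qᵀx subject to Ax + s = b, s ∈ K, where K is a product of cones; plain QP uses only nonnegative (and zero) cones, while SOCP adds the second-order cone. As a purely illustrative sketch (no solver dependencies, not the clarabel4j API), here is what second-order cone membership means in plain Java:

```java
// Illustrative only: the second-order cone of dimension n is
// { s in R^n : s[0] >= || (s[1], ..., s[n-1]) || }.
// Supporting SOCP means mapping constraints onto cones like this one.
public final class SecondOrderCone {

    /** TRUE if s lies in the second-order cone of dimension s.length. */
    public static boolean contains(final double[] s) {
        double sumSq = 0.0;
        for (int i = 1; i < s.length; i++) {
            sumSq += s[i] * s[i];
        }
        return s[0] >= Math.sqrt(sumSq);
    }

    public static void main(final String[] args) {
        // (5, 3, 4): 5 >= sqrt(9 + 16) = 5, so inside the cone
        System.out.println(SecondOrderCone.contains(new double[] { 5.0, 3.0, 4.0 }));
        // (4, 3, 4): 4 < 5, so outside
        System.out.println(SecondOrderCone.contains(new double[] { 4.0, 3.0, 4.0 }));
    }
}
```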
In ojAlgo itself the QP solver has been tuned and tweaked in several ways, making it slightly faster and more robust. There is also a new null-space projection pre-processor (eliminating equality constraints and reducing the number of variables). Applying it, or not, can have a huge impact on performance. The problem is that it is not easy to determine in advance when it should be applied. There is default logic to enable/disable it, but you really should experiment to see whether it helps in your case. (It currently does not work in combination with extended precision.)
/**
 * Null-Space projection. (Eliminating equality constraints and reducing the number of variables.)
 * <p>
 * TRUE means yes, FALSE no, and NULL auto. Even if configured to TRUE there must also be both
 * equality and inequality constraints for this to actually be used.
 */
public Configuration projection(final Boolean projection) {
    myProjection = projection;
    return this;
}
v56.2.0 also contains many sparse linear algebra improvements, particularly in relation to R064CSC (compressed sparse column storage) and a new SparseQDLDL decomposition (sparse quasi-definite LDL). This is all part of ongoing work towards an all-new OSQP-based QP solver. The idea is that this will complement (not replace) the existing active set solver. Based on what I’ve seen so far, the active set solver is very accurate and fast on smaller models, but may struggle to find the optimal “active set” on larger models. The OSQP solver (so far) is not as accurate, but appears to scale much better.
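For readers unfamiliar with the layout, here is a minimal plain-Java sketch of the compressed sparse column (CSC) format that storage like R064CSC is built around (illustrative only, not ojAlgo's API): nonzero values and their row indices are stored column by column, with a pointer array marking where each column starts.

```java
// Hypothetical CSC illustration. The 3x3 matrix stored below is:
// | 1 0 2 |
// | 0 3 0 |
// | 0 0 4 |
public final class CscDemo {

    // Column j occupies positions [COL_POINTERS[j], COL_POINTERS[j+1])
    static final int[] COL_POINTERS = { 0, 1, 2, 4 };
    static final int[] ROW_INDICES = { 0, 1, 0, 2 };
    static final double[] VALUES = { 1.0, 3.0, 2.0, 4.0 };

    /** y = A x, visiting only the stored nonzeros, column by column. */
    public static double[] multiply(final double[] x) {
        double[] y = new double[3];
        for (int j = 0; j < 3; j++) {
            for (int k = COL_POINTERS[j]; k < COL_POINTERS[j + 1]; k++) {
                y[ROW_INDICES[k]] += VALUES[k] * x[j];
            }
        }
        return y;
    }

    public static void main(final String[] args) {
        double[] y = CscDemo.multiply(new double[] { 1.0, 1.0, 1.0 });
        System.out.println(y[0] + " " + y[1] + " " + y[2]);   // 3.0 3.0 4.0
    }
}
```

Column-wise access like this is also why CSC pairs naturally with column-oriented factorizations such as a sparse LDL.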
Performance
I ran the same benchmark as in the LP & QP Performance Report (the QP 1k Maros and Meszaros benchmark), but with JOptimizer replaced by Clarabel4j and with the newer ojAlgo version.

CPLEX and Hipparchus are the same versions as before, and nothing has changed: CPLEX handles 94% of the cases and Hipparchus 82%.
JOptimizer, which performed really poorly, has been replaced with Clarabel4j – and it performs really well. Apart from good execution times, it handles all the cases (a 100% success rate).
ojAlgo generally performs better than in the last benchmark (slope of the curve is slightly flatter) and it handles one more case, bringing it to a 94% success rate (same as CPLEX).
Looking at the slopes of the trend lines, you may get the impression that Clarabel4j scales worse than CPLEX, and that the Java solvers scale really poorly. I believe the truth is rather that CPLEX has significant overhead when calling native code, and this overhead is very much visible with the many smaller models in this set of test models. It’s an old JNI-based integration, while Clarabel4j is FFM-based. The Java solvers have no such overhead at all. If we could take this overhead out of the equation, I believe the slopes would be more similar – the Java solvers would of course still be slower, but the trend lines would be more parallel.
Comparing the Java solvers, ojAlgo is consistently about one order of magnitude faster than Hipparchus, and ojAlgo has a success rate of 94% while Hipparchus is at 82%. An interesting note: on the 2 models ojAlgo failed to solve, Hipparchus succeeded, and on the 6 models Hipparchus failed to solve, ojAlgo succeeded – they complement each other.
Benchmark details, raw results and execution logs can be found here: https://github.com/optimatika/ojAlgo-mathematical-programming-benchmark/tree/master/results/2026/01
