Dynex: Testing Efficacy and Performance of DynexSolve Proof-of-Useful-Work (“PoUW”)
DynexSolve operates “Neuromorphic Chips” which supposedly outperform traditional computing approaches for a range of computational tasks. In this article we will dive into the DynexSolve chips and do a hands-on test of their performance relative to other available state-of-the-art software. The Dynex platform supports different computation types, for example “SAT/MAX-SAT” or “MACHINE LEARNING”. In this article we will focus on SAT problem solving, using a standard Ubuntu 22.04 machine.
We will cover the following topics:
- Neuromorphic computing? Huh?
- Dynex and neuromorphic computing
- Benchmark problems
- State-of-the-art technology
- Efficiency of DynexSolve in comparison
Neuromorphic Computing? Huh?
The term neuromorphic computing was coined by Carver Mead in the late 1980s and at the time referred primarily to analogue-digital implementations of brain-inspired computing. In recent years, however, the term has come to encompass a broad range of hardware and software implementations, as the field continues to evolve and large-scale funding opportunities have become available for brain-inspired computing systems, including the DARPA SyNAPSE project and the European Union's Human Brain Project.
The term neuromorphic computer refers to non-von Neumann computers whose structure and function are influenced by biology and physics. A von Neumann computer consists of separate CPU and memory units, with data and instructions stored in the latter. In a neuromorphic computer, by contrast, both processing and memory are governed by neurons and synapses. Unlike von Neumann computers, neuromorphic computers define their programs by the structure of the neural network and its parameters rather than by explicit instructions. And while von Neumann computers encode information as numerical values expressed in binary terms, neuromorphic computers receive spikes as input, with the information encoded numerically in the time at which the spikes occur, their magnitude, and their shape.
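To make spike-based encoding more concrete, here is a minimal sketch (our own illustration, not Dynex code) of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models: the information sits in when the spikes occur rather than in a stored binary value.

```python
def lif_spike_times(current, threshold=1.0, leak=0.1, dt=0.01, t_max=1.0):
    """Simulate a leaky integrate-and-fire neuron; return its spike times.

    The membrane potential v integrates the input current and slowly
    leaks back toward zero; whenever v crosses the threshold, the
    neuron emits a spike and v is reset.
    """
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += (current - leak * v) * dt  # Euler step of dv/dt = I - leak*v
        if v >= threshold:
            spikes.append(round(t, 4))
            v = 0.0  # reset after the spike
        t += dt
    return spikes

# A stronger input current spikes earlier and more often:
weak, strong = lif_spike_times(1.5), lif_spike_times(3.0)
```

Both runs produce spikes, but the stronger input fires its first spike earlier and fires more often, which is exactly how such timing-based codes carry magnitude information.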
Neuromorphic computers are well documented in the literature, and their features are often cited as motivating factors for their implementation and utilization. An attractive feature of neuromorphic computers is their extremely low power consumption: they consume orders of magnitude less power than conventional computers. This low-power operation is due to the fact that they are event-driven and massively parallel, with only a small portion of the entire system being active at any given time. Energy efficiency alone is a compelling reason to investigate the use of neuromorphic computers in light of the increasing energy costs associated with computing, as well as the increasing number of applications that are energy constrained (e.g. edge computing applications). As neuromorphic computers implement neural network-style computations inherently, they are a natural platform for many of today’s artificial intelligence and machine learning applications. The inherent computational properties of neuromorphic computers can also be leveraged to perform a wide variety of different types of computation.
Dynex & Neuromorphic Computing
Dynex has developed a proprietary circuit design, the Dynex Neuromorphic Chip, that complements the Dynex ecosystem and turns any modern device into a neuromorphic computing chip that can perform orders of magnitude faster than classical or quantum methodologies for a wide range of applications.
The proprietary Dynex Chip design is built on ideal memristors. Memristors are two-terminal resistive devices with memory. In general, their nonlinear dynamic behaviour is mathematically modelled by a differential algebraic equation (DAE) set, in which an ordinary differential equation (ODE) governs the time evolution of the memory state, while an algebraic relation captures the state- and input-dependent Ohm's law. The memristor, a contraction of memory resistor, was theoretically introduced in 1971 by L. O. Chua. The device introduced in that pioneering paper, presently referred to as the ideal memristor, is the fourth fundamental two-terminal circuit element, the other three being the resistor, the capacitor, and the inductor. Since then, interest in memristors and their applications has been growing exponentially, with both academia and industry deploying considerable funds and personnel to fabricate, model, and explore the full potential of these devices in electronics applications.
As the ideal memristor is a theoretical model, the chip cannot be built with current technology. However, it can be simulated efficiently by numerically integrating its equations of motion. The entire Dynex Chip can be treated as a non-linear dynamical system whose point attractors represent the solutions to the problem, and since Dynex machines are non-quantum systems, their equations of motion can be integrated numerically. The performance of similar machines on a wide variety of combinatorial optimisation problems has already been demonstrated to be orders of magnitude faster than that of traditional algorithmic approaches. Subsequently, by employing topological field theory, it was shown that the physical reason behind this efficiency rests on the dynamical long-range order that develops during the transient dynamics, in which avalanches (instantons in the field-theory language) of different sizes are generated until the system reaches an attractor. The transient phase of the solution search therefore resembles several phenomena in nature, such as earthquakes, solar flares or quenches.
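As a toy illustration of "solving by ODE integration" (our own sketch, not the actual Dynex equations of motion), consider a gradient flow over a double-well energy: its two point attractors stand in for the Boolean values, and any trajectory relaxes into one of them.

```python
def integrate_to_attractor(x0, steps=2000, dt=0.01):
    """Relax a state into a point attractor by Euler integration.

    Toy gradient flow dx/dt = -E'(x) for the double-well energy
    E(x) = (x^2 - 1)^2 / 4: its two point attractors x = +1 and x = -1
    stand in for the Boolean values True and False. This illustrates
    the principle only; it is not the Dynex equations of motion.
    """
    x = x0
    for _ in range(steps):
        x -= x * (x * x - 1.0) * dt  # Euler step of dx/dt = -x(x^2 - 1)
    return x
```

Starting anywhere right of zero, the trajectory settles into the x = +1 well; starting left of zero, it settles into x = -1. In the same spirit, the solution of the original problem can be read off from which attractor the simulated circuit ends up in.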
A major problem in evaluating algorithms for NP-complete problems is the need for systematic generation of hard test instances with previously known properties of the optimal solutions. On the basis of statistical-mechanics results, W. Barthel et al. propose random generators of hard and satisfiable instances for the 3-satisfiability problem (3SAT). The design of the hardest problem instances is based on the existence of a first-order ferromagnetic phase transition and the glassy nature of excited states. The analytical predictions are corroborated in their paper by numerical results obtained from both complete and stochastic local-search algorithms.
Since then, these problems (“Barthel instances”) have become widely used as benchmark instances for evaluating the performance of constraint-satisfaction solvers and algorithms. In this article we also use these instances, with growing complexity, to test DynexSolve's performance. In the Dynex GitHub repository, a number of example problem files have been published:
- 1,000 variables: https://github.com/dynexcoin/Dynex-Neuromorphic-Chip/blob/main/cnf/transformed_barthel_n_1000_r_8.000_p0_0.080_instance_001.cnf
- 10,000 variables: https://github.com/dynexcoin/Dynex-Neuromorphic-Chip/blob/main/cnf/transformed_barthel_n_10000_r_8.000_p0_0.080_instance_001.cnf
- 100,000 variables: https://github.com/dynexcoin/Dynex-Neuromorphic-Chip/blob/main/cnf/transformed_barthel_n_100000_r_8.000_p0_0.080_instance_001.cnf
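These files use the standard DIMACS CNF format. The sketch below is not the Barthel generator (which relies on the statistical-mechanics construction described above); it merely illustrates the format by writing a random 3-SAT instance with a planted satisfying assignment, so the result is satisfiable by construction.

```python
import random

def planted_3sat_dimacs(n_vars, n_clauses, seed=42):
    """Write a random 3-SAT instance in DIMACS CNF format.

    NOT the Barthel generator; illustration of the file format only.
    A random assignment is planted, and only clauses satisfied by it
    are kept, so the instance is satisfiable by construction.
    """
    rng = random.Random(seed)
    planted = [rng.choice([True, False]) for _ in range(n_vars)]
    lines = [f"p cnf {n_vars} {n_clauses}"]  # DIMACS header
    while len(lines) < n_clauses + 1:
        vs = rng.sample(range(n_vars), 3)  # three distinct variables
        lits = [(v + 1) if rng.random() < 0.5 else -(v + 1) for v in vs]
        # keep the clause only if the planted assignment satisfies it
        if any((l > 0) == planted[abs(l) - 1] for l in lits):
            lines.append(" ".join(map(str, lits)) + " 0")
    return "\n".join(lines)
```

Each clause line lists three signed variable indices terminated by 0, exactly the layout the benchmark files above use (with far larger `n_vars` and `n_clauses`).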
As we are testing job type “SAT” on the Dynex platform, we have to identify state-of-the-art SAT solvers to compare against. The SAT Competition is a competitive event for solvers of the Boolean satisfiability (SAT) problem; the 2023 edition is organized as a satellite event to the SAT Conference 2023 and continues the series of annual SAT Competitions and SAT-Races / Challenges. All solvers that participated in the 2022 competition can be downloaded from the organisers’ GitHub here:
Let’s download the file and unzip the contents first. According to the results page, the winner of the SAT Competition 2022 was Kissat_MAB-HyWalk. Let’s open a Terminal console and compile this solver.
First we need to set the permissions for the directories:
chmod -R 777 Kissat_MAB-HyWalk
Navigate into the directory Kissat_MAB-HyWalk and run
./configure && make
This will build the solver in the directory build. We also need the test instances above, which can either be downloaded or cloned with Git.
Let’s start the solver on the problem file with 1,000 variables:
./build/kissat transformed_barthel_n_1000_r_8.000_p0_0.080_instance_001.cnf
Unsurprisingly, the solver returns the result after just 0.20 seconds:
c ---- [ resources ] ---------------------------------------------------------
c maximum-resident-set-size: 4608000 bytes 4 MB
c process-time: 0s 0.20 seconds
c ---- [ shutting down ] -----------------------------------------------------
Mathematically speaking, there are 2¹⁰⁰⁰ different possible assignments, which KISSAT nevertheless solves very efficiently by applying certain heuristics and methodologies for exploring the search space. Making it a bit more complex, we try the file with 10,000 variables:
This takes KISSAT_MAB-HyWalk, the winner of the 2022 SAT Competition, quite a bit longer; we interrupt the computation after 15 minutes (you can keep it running at home to see if it eventually solves the instance and how long that takes). The reason is the increased complexity: the first formulation consisted of just 1,000 variables, with the task of finding an assignment that satisfies all 8,000 clauses.
This problem file consists of 10,000 variables, ten times more than before, and therefore has a much larger search space of 2¹⁰⁰⁰⁰ possible assignments to satisfy 80,000 clauses.
KISSAT's time-to-solution (TTS) increased significantly. Let’s make it even more complex and try the file with 100,000 variables:
The complexity of this problem file has increased to 100,000 variables and 800,000 clauses to be satisfied, which allows for 2¹⁰⁰⁰⁰⁰ different potential assignments. It is obvious that KISSAT is reaching its limitations, with TTS growing exponentially as complexity increases. Note that we interrupted our run after one hour; feel free to keep it running longer yourself to evaluate if and how long it would take, or try different solvers.
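To see why these counts matter, a naive solver that enumerates all 2ⁿ assignments is hopeless beyond a few dozen variables; the sketch below (our own illustration) shows the brute-force baseline that solver heuristics are competing against.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively check all 2^n assignments.

    Clauses use DIMACS-style literals: positive k means variable k is
    True, negative k means it is False. Returns the first satisfying
    assignment (or None) plus the number of assignments tried.
    """
    tried = 0
    for bits in product([False, True], repeat=n_vars):
        tried += 1
        if all(any((l > 0) == bits[abs(l) - 1] for l in c) for c in clauses):
            return bits, tried
    return None, tried

# (x1 or x2) and (not x1 or x3) over 3 variables: found after trying
# 3 of the 8 assignments. The search space doubles with every added
# variable, so 2^100000 is far out of reach for this approach.
model, tried = brute_force_sat([[1, 2], [-1, 3]], 3)
```

Doubling with every variable is exactly the exponential wall the benchmark instances above are probing.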
Efficiency of DynexSolve in Comparison
For the reasons mentioned above, DynexSolve should be capable of solving such hard instances more efficiently than the winner of the SAT Competition 2022, the solver tested here. Dynex provides a reference implementation of the algorithm on GitHub, which we clone:
git clone https://github.com/dynexcoin/Dynex-Neuromorphic-Chip.git
Then navigate into the directory:
cd Dynex-Neuromorphic-Chip
And build the reference implementation:
g++ dynex.cc -o dynex -std=c++17 -Ofast -lpthread -fpermissive
Ignore the warnings and make sure that the binary dynex has been generated.
Let us repeat the tests from above. We start with solving the problem with 1,000 variables:
./dynex -i cnf/transformed_barthel_n_1000_r_8.000_p0_0.080_instance_001.cnf
Similar to KISSAT, Dynex solves this problem quickly, in 0.027 seconds:
c  TIME SPENT: 0.02700s
c  BEST LOC = 0 (GLOBAL 0) **
s  SATISFIABLE
v 1 -2 -3 -4 -5 6 7 -8 -9 10 11 12 13 14 15 -16 17 18 -19 -20
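The s and v lines follow the DIMACS output conventions: the v line lists one literal per variable, positive for True and negative for False. Here is a small sketch (our own helper, not part of the Dynex repository) for checking such an assignment against a clause list.

```python
def check_model(v_line, clauses):
    """Check a solver's DIMACS-style 'v' line against a clause list.

    Positive literal k means variable k is True, negative means False;
    the formula is satisfied if every clause contains at least one
    satisfied literal. The 'v' prefix and trailing 0 are ignored.
    """
    lits = [int(t) for t in v_line.split() if t != "v"]
    model = {abs(l): l > 0 for l in lits if l != 0}
    return all(any(model.get(abs(l)) == (l > 0) for l in c) for c in clauses)

# Tiny example: assignment x1=True, x2=False, x3=False against
# (x1 or x2) and (not x2 or not x3):
ok = check_model("v 1 -2 -3 0", [[1, 2], [-2, -3]])
```

Independently verifying a returned model this way is standard practice when benchmarking solvers, since a wrong "SATISFIABLE" claim would otherwise go unnoticed.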
Let’s solve the problem with 10,000 variables:
./dynex -i cnf/transformed_barthel_n_10000_r_8.000_p0_0.080_instance_001.cnf
In contrast to KISSAT, Dynex requires a little less than 1 second (0.942 s) to solve the problem:
c  TIME SPENT: 0.94200s
c  BEST LOC = 3075 (GLOBAL 0)
s  SATISFIABLE
v -1 2 -3 4 5 -6 -7 -8 -9 -10 11 12 13 -14 15 -16 17 18 -19 -20
Of course we want to know how long it takes Dynex to solve the problem file with 100,000 variables:
./dynex -i cnf/transformed_barthel_n_100000_r_8.000_p0_0.080_instance_001.cnf
Given the large number of variables (100,000) and clauses (800,000) in the problem, the simulation time per ODE integration step increases as well. However, Dynex manages to solve the problem, with its search space of 2¹⁰⁰⁰⁰⁰ assignments, in less than 1 minute (48.65 s).
c  TIME SPENT: 48.65700s
c  BEST LOC = 33242 (GLOBAL 0)
s  SATISFIABLE
This is a remarkable result, especially given that the winner of the 2022 SAT Competition is not able to reach a solution within a reasonable time frame. The approach of converting a SAT problem into a memristor-based chip design and then simulating the circuit until it evolves towards its lowest-energy state appears to be an efficient computing process.
Some important notes:
- The Dynex Chip is not a physically produced chip; DynexSolve simulates it by integrating its equations of motion (ODE integration)
- Deriving the equations of motion from the chip design requires a number of variables / parameters to be introduced (as explained in the DynexSolve paper)
- In the case of SAT problems, there are in total 6 parameters to be defined. These parameters depend on the underlying structure of a problem type. The reference implementation used in our example applies parameters tuned for the problem type of the Barthel instances and may not be suitable or applicable out of the box for other problem types
- Identifying the optimal parameters is an iterative tuning process which happens on the Dynex platform, where a large number of GPUs tune the parameters until the best values have been identified
- With the parameters tuned, TTS is optimised for every similar problem type, meaning problems of a similar type can be solved even more efficiently on the platform.
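We cannot reproduce the platform's distributed tuning here, but as a minimal stand-in, a generic random search over a hypothetical 6-parameter objective (a toy quadratic in place of real solver runs) sketches the idea:

```python
import random

def random_search(objective, bounds, iters=200, seed=0):
    """Minimise an objective by plain random search.

    A minimal stand-in for the distributed parameter tuning described
    above (the real process runs across many GPUs on the platform).
    bounds is a list of (low, high) ranges, one per parameter.
    """
    rng = random.Random(seed)
    best_p, best_v = None, float("inf")
    for _ in range(iters):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = objective(p)
        if v < best_v:
            best_p, best_v = p, v
    return best_p, best_v

# Hypothetical stand-in objective: "time-to-solution" as a function of
# 6 ODE parameters, here a toy quadratic instead of real solver runs.
def toy_tts(p):
    return sum((x - 0.5) ** 2 for x in p)

params, score = random_search(toy_tts, [(0.0, 1.0)] * 6)
```

On the platform, each objective evaluation corresponds to actual solver runs, which is why spreading the search across many GPUs pays off.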
We hope you enjoyed the article. More information about Dynex can be found on the official channels:
Update: Our community asked us to provide benchmarks with an additional solver and with a few more problem instances (of a similar problem type). We have therefore uploaded additional problem instances to our Git repository; check it out. Here are the results:
Comparison with Kissat_MAB-HyWalk, winner of the SAT Competition 2022 [http://www.satcompetition.org/]. This is a state-of-the-art conflict-driven clause learning (CDCL) solver based on the Davis–Putnam–Logemann–Loveland (DPLL) algorithm, combined with local search (random walk) for setting initial phases.
Comparison with YalSat 1.0.1, winner of the SAT Competition 2017 Random Track [https://github.com/arminbiere/yalsat]. This is a local-search solver using a variation of WalkSAT, a random-walk algorithm. This category of solvers is sometimes able to have a “lucky shot” at solving problem instances due to its random walk nature.
Also here it becomes clear that local-search does not provide a robust and consistent way of solving Barthel instances.
Comparison with PalSat [https://github.com/arminbiere/yalsat], running on 3 CPU cores (command line option -t 3) and Dynex on 3 CPU cores (command line option -w 3)
PalSat is a very efficient parallel implementation and has been designed for high computational speed. It takes PalSat 16–28 million flips (= computing operations) to find solutions. In comparison, the Dynex reference implementation, which has been written not for speed but to showcase the calculations clearly and understandably in the source code, requires only between 15 and 183 integration steps.