The resulting substance is, on an atomic level, "two intertwined, interlinked crystal lattices -- that in itself is a strange thing," he said. The study, which is published this week in the journal PNAS, found that under these conditions potassium atoms entered what is known as a chain-melted state, where one of the two lattices -- the "guest" part -- dissolved into liquid, while the "host" part remained solid. The resulting substance -- if it could be observed by humans -- would look like a solid block of potassium which is simultaneously leaking liquid and reforming as a solid.
Hermann describes it as "a sponge material that can soak up a liquid -- the sponge is solid, but in this case they are the same material. It is a sponge that soaks itself up; as the liquid leaks out, it reforms the sponge around itself." Artificial intelligence models were developed to extrapolate what could be seen at the atomic level and to apply these findings to much larger theoretical samples.
Hermann says that the machine-learning modeling has implications for how other elements behave under similarly extreme conditions.

The state vectors corresponding to the training input instances of each data set were then used to train a single fully connected classification layer using the stochastic gradient descent algorithm (Robbins and Monro; Mei et al.). The accuracy of the trained network was calculated on the testing data sets. Digit sub-vocabulary of the TI46 speech corpus (Liberman et al.).
There are 1, instances in the training data set and 2, instances in the testing data set. There are 4, and 6, instances in the training and testing data sets, respectively.
For the spatial data sets (images), we used the handwritten digits from the MNIST (Deng) data set, containing 60,000 images of digits 0 through 9 in the training set and 10,000 images in the testing set. In addition, we also created an extended MNIST data set that contains all the images from the original MNIST data set, plus the same set of images transformed by rotation, shifting, and noise injection.
It has 240,000 images in the training data set and 40,000 images in the testing data set. The first step is converting the images or the analog speech signals into spike trains to be applied as inputs to the liquid. For spatial data (images), there are p input spike trains fed into the liquid, with p being the number of pixels in an image. The mean firing rate of each spike train is modulated depending upon the corresponding pixel intensity.
Each image pixel (the i-th pixel) is mapped to a Poisson-distributed spike train whose mean firing rate r_i is proportional to the corresponding pixel intensity I_i, that is, r_i ∝ I_i. The model was used to convert each audio sample to temporal variation in the intensity of 39 frequency channels.
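This rate-coding step can be sketched as follows. The maximum rate `r_max`, the simulation duration, and the time step are hypothetical values chosen for illustration, not parameters from the text:

```python
import numpy as np

def image_to_spike_trains(image, r_max=200.0, duration=0.1, dt=0.001, rng=None):
    """Convert a grayscale image into Poisson spike trains, one per pixel.

    Each pixel i is assigned a mean firing rate r_i proportional to its
    intensity I_i (rate coding): r_i = r_max * I_i / 255.
    Returns a boolean array of shape (n_pixels, n_steps).
    """
    rng = np.random.default_rng() if rng is None else rng
    intensities = image.astype(float).ravel() / 255.0  # normalize to [0, 1]
    rates = r_max * intensities                        # Hz, proportional to intensity
    n_steps = round(duration / dt)
    # A Poisson process fires in a small bin dt with probability r * dt
    probs = rates[:, None] * dt
    return rng.random((rates.size, n_steps)) < probs

# Example: a 2x2 "image" with one bright and one dark pixel
img = np.array([[255, 0], [128, 64]], dtype=np.uint8)
spikes = image_to_spike_trains(img, rng=np.random.default_rng(0))
print(spikes.shape)  # (4, 100): 4 input spike trains, 100 time steps
```

The brightest pixel produces the densest spike train; a zero-intensity pixel produces no spikes at all.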
The time step in this work is 0. In this section, we will explore the effects of dividing a large liquid, by means of the standard measures for SP and AP explained in section 2. We involve the same general tasks suggested in Maass et al. In order to measure the pointwise separation property, we generated input spike trains u(t) and v(t) with different distances d(u, v)_in between them. The distance between two input spike trains is evaluated according to the methodology explained in Maass et al.
The average SP_pw was evaluated over 10 different weight initialization trials and the results are shown in Figure 4. As the figure illustrates, SP_pw improves with the distance d(u, v)_in between two inputs, and also with the number of liquids in the LSM. Figure 4. The effect of dividing a large liquid on SP_pw, at different distances d(u, v)_in between inputs. (A,B) Two input spike trains u and v illustrated at two different distances d(u, v)_in. (C) The variation of pairwise separation with the distance between inputs, for different numbers of liquids. (D) The variation of pairwise separation with the number of liquids, at different input distances d(u, v)_in.
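As a rough illustration of a distance between spike trains, the sketch below uses the Euclidean distance between binned spike counts. This is a simplified stand-in for the measure of Maass et al. referenced in the text, with hypothetical bin settings:

```python
import numpy as np

def binned_counts(spike_times, t_max, bin_width):
    """Bin a spike train (a list of spike times) into per-bin spike counts."""
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=bins)
    return counts

def spike_train_distance(u, v, t_max=1.0, bin_width=0.05):
    """A simplified input distance d(u, v): Euclidean distance between
    binned spike counts, normalized by the number of bins. A stand-in for
    the distance measure of Maass et al. used in the text."""
    cu = binned_counts(u, t_max, bin_width)
    cv = binned_counts(v, t_max, bin_width)
    return np.linalg.norm(cu - cv) / len(cu)

# Identical trains have distance 0; a perturbed train has positive distance
u = [0.1, 0.3, 0.55, 0.9]
v = [0.12, 0.41, 0.66, 0.95]
print(spike_train_distance(u, u))      # 0.0
print(spike_train_distance(u, v) > 0)  # True
```

Sweeping such a distance over input pairs is what produces the d(u, v)_in axis of the experiment described above.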
The resultant states x_{u_i}(t_0) were used to create the matrix M_s explained in Equation 4. As the figure illustrates, SP_lin increases with the number of liquids. However, the rate of increase of SP_lin reduces with the increasing number of liquids. Figure 5. The average rank of the state matrix M_s, which indicates the inter-class separability (in red), and the average rank of the matrix M_a, which is an indication of the intra-class generalization capability (in blue).
For the generalization property, we conducted the same experiment with a different state matrix M_a. To create this matrix, we involved jittered versions u_i^j(t) of the input signal u_i(t), as explained in section 2. The average rank of the matrix M_a is shown in Figure 4. A lower rank r_a of M_a suggests a better approximation of intra-class input examples. According to the figure, r_a increases with the number of liquids. This signals the liquid losing its ability to generalize intra-class inputs. We observed that SP_pw improves by 3.
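The two rank-based measures can be sketched numerically. All data here are synthetic stand-ins for liquid states (dimensions and noise scales are hypothetical); the point is only that separation wants M_s to have high rank while generalization wants M_a to have low rank:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_classes, n_jitters = 50, 10, 10

# M_s: one state column per distinct input class -> high rank desired (separation)
M_s = rng.standard_normal((n_neurons, n_classes))

# M_a: columns are states for jittered versions u_i^j(t) of ONE input u_i(t).
# Ideally they collapse onto a single direction -> low rank desired (generalization).
base = rng.standard_normal((n_neurons, 1))
M_a = base + 0.001 * rng.standard_normal((n_neurons, n_jitters))  # tiny jitter noise

r_s = np.linalg.matrix_rank(M_s)           # exact numerical rank
r_a = np.linalg.matrix_rank(M_a, tol=0.1)  # rank with a tolerance absorbing noise
print(r_s, r_a)  # 10 1
```

A liquid that maps all ten classes to independent directions (r_s = 10) while mapping jittered copies of one input to essentially one direction (r_a = 1) would be ideal on both counts.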
The SP_lin improvement is 1. The reported improvement in SP_pw is 1. It is noteworthy that when training using structural plasticity, the inter-class separation capability can be improved with respect to a traditional liquid with random connections. Without involving such complex learning techniques, one can obtain improved separation by simply dividing a liquid, as shown in our work.
However, note that such reservoir connection learning methods can simultaneously preserve the ability of the LSM to approximate intra-class examples, which is not attainable by the ensemble approach at higher numbers of liquids. As explained in section 2. Using the experimental setup explained in the previous section 2. We refined the percentage connectivity for each task as shown in Table 1. The classifier was trained using the liquid states corresponding to the training examples, and the classification accuracy of the trained network was obtained for unseen instances from the test data set.
For each application, we then created an ensemble of liquids with N_tot/N_ens neurons in each small liquid. Figure 6 shows that the DR increases up to a certain number of liquids in the ensemble and then saturates, for the four different applications we have considered. This signals that the ensemble of liquids, in principle, gives a better representation of the input with an increasing number of liquids, up to a certain point. Figure 7 shows that the accuracy indeed improves with the number of liquids until a certain point.
We noticed that N_ens,opt is a function of the application, and that increasing N_ens beyond N_ens,opt actually results in accuracy loss. When comparing Figures 6, 7, it is evident that the point at which the DR saturates is the same as N_ens,opt. This indicates that dividing a large liquid into multiple smaller liquids enhances the class discrimination capability of the liquid, leading to improved classification accuracy. However, note that after the N_ens,opt point, the DR saturates whereas the accuracy degrades.
However, it could still be utilized as a measure for identifying the point at which the accuracy starts to drop, N_ens,opt. This is the same point at which the DR stops improving. Figure 6. The total number of neurons in each ensemble of liquids was kept the same. Note that all the DR trends increase with the number of liquids, and saturate after a certain point that depends on the application.
Figure 7. Note that all the accuracy trends peak at a certain point that depends on the application. Figure 8 shows that SP improves continuously with the number of liquids. Improved separation suggests larger dispersion among the centroids of the liquid states corresponding to instances from different classes, which renders the input representations provided by the liquid easier to classify. This is illustrated in the cartoon in Figure 9A for a set of two-dimensional data points from two classes, wherein higher SP while maintaining the same AP results in enhanced class discrimination capability.
At the same time, Figure 8 indicates that AP also increases with the number of liquids, implying that a larger number of liquids leads to higher dispersion between projected inputs from the same class. Higher AP for a given SP is not desirable, since it could potentially lead to overlap among instances belonging to different classes, as depicted in Figure 9B, thereby degrading the class discrimination capability.
Since both SP and AP increase, the ratio DR gives a better measure of the overall effect of the proposed ensemble approach on the classification accuracy of the LSM than the individual components per se. As shown in Figure 8, the DR increases until a certain number of liquids, signaling the dominance of the improvement in SP over the degradation in AP, as graphically illustrated in Figure 9C. In contrast, as the number of liquids is increased beyond N_ens,opt, DR saturates, since the increment in SP is no longer sufficient to compensate for the degradation in AP, as shown in Figure 8.
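The SP, AP, and DR quantities discussed above can be sketched on labeled state vectors. The dispersion measures below (centroid-to-centroid distance for SP, within-class spread for AP) are simplified stand-ins for the exact definitions referenced in section 2, and the data are synthetic:

```python
import numpy as np

def separation_approximation(states, labels):
    """Compute SP, AP, and DR = SP / AP for a set of liquid state vectors.

    SP: mean pairwise distance between class centroids (inter-class dispersion).
    AP: mean distance of states from their own class centroid (intra-class spread).
    Simplified stand-ins for the measures used in the text.
    """
    classes = np.unique(labels)
    centroids = np.array([states[labels == c].mean(axis=0) for c in classes])
    # SP: average centroid-to-centroid distance over all class pairs
    diffs = centroids[:, None, :] - centroids[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    sp = d[np.triu_indices(len(classes), k=1)].mean()
    # AP: average distance of each state to its own class centroid
    ap = np.mean([np.linalg.norm(states[labels == c] - centroids[i], axis=1).mean()
                  for i, c in enumerate(classes)])
    return sp, ap, sp / ap

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
# Two well-separated clusters: inter-class dispersion dominates intra-class spread
states = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(5, 1, (100, 20))])
sp, ap, dr = separation_approximation(states, labels)
print(sp > ap, dr > 1)  # True True
```

With this kind of measure, a rising DR means SP is growing faster than AP, which is exactly the regime described above for liquid counts below N_ens,opt.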
When the dispersion between classes (due to the increment in SP) is not sufficient to compensate for the dispersion occurring for instances within the same class (due to AP degradation), there can be overlaps among class boundaries, as depicted in Figure 9D, leading to accuracy loss, as experimentally validated in Figure 7 across different applications. Figure 8. The trends of different measures associated with LSMs, with the increasing number of liquids. Both AP and SP continuously increase with the number of liquids.
Note that the increment in AP is more significant than that of SP for larger numbers of liquids. Figure 9.
A cartoon that shows the distribution of two-dimensional data points that belong to two classes under different conditions. Note that the class boundaries can become overlapped, leading to classification errors. Hence, increased AP is not desirable. (C,D) Two scenarios where both SP and AP increased from the baseline distribution of data points (figure in the middle). (D) The improvement in SP is not sufficient to compensate for the AP degradation, leading to overlapped class boundaries.
In order to graphically view the variation in SP and AP with the number of liquids for the applications considered in this work, we used Principal Component Analysis (PCA) to plot the high-dimensional liquid states in a low-dimensional space. Generally, the first few principal components preserve most of the variance in a given high-dimensional data set. Hence, the same object in multi-dimensional space can be visualized in low-dimensional space with insignificant changes. To create such a low-dimensional projection of the liquid state vectors for different input patterns, we reduced their dimension using PCA and plotted the two most significant Principal Components (PCs), corresponding to the two largest eigenvalues.
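The projection step can be sketched with a plain SVD-based PCA. The state dimension, class means, and counts below are hypothetical placeholders for actual liquid states:

```python
import numpy as np

def project_to_pcs(states, n_components=2):
    """Project high-dimensional state vectors onto their first principal
    components (the directions of largest variance)."""
    centered = states - states.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal directions,
    # ordered by singular value (largest variance first)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T, s

rng = np.random.default_rng(0)
# Hypothetical 135-dimensional "liquid states" for 300 inputs from 3 classes
means = [0.0, 4.0, 8.0]
states = np.vstack([rng.normal(m, 1.0, (100, 135)) for m in means])
proj, sing_vals = project_to_pcs(states)
print(proj.shape)  # (300, 2)
```

Plotting the two columns of `proj` colored by class gives exactly the kind of scatter described for Figure 10: the between-class structure dominates the first PC, so class overlap (or its absence) is visible in 2D.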
Figure 10 plots the high-dimensional liquid state vectors, projected onto a two-dimensional space using the first two PCs, for randomly picked input patterns from three classes in the MNIST data set. Figure 10 clearly illustrates why the accuracy improves up to the N_ens,opt point and degrades beyond that, as explained below. The single liquid case shows concentrated (low AP) but overlapped (low SP) data. This is where the AP is the lowest, due to the concentrated data points. As the number of liquids increases, the classes become clearly separated. Note that the points belonging to the same class also move away from their respective centroids due to the increased AP.
This ultimately results in the aforementioned overlapping between the classes for numbers of liquids larger than N_ens,opt, which gives rise to more misclassifications.
Figure 10. The distribution of the liquid state vectors, as a projection onto the first two principal components PC1 and PC2, for different numbers of liquids. The liquid state vectors (represented as dots) correspond to three classes in the MNIST image data set. Each class has randomly picked liquid state vectors. Distributions at smaller numbers of liquids show less overlap between classes, and the data points are more concentrated at the class mean points, in contrast to the largest ensemble, which has significant overlap, causing the accuracy degradation.
The ensemble of liquids approach creates smaller liquids where the dynamics of one network do not affect another. When evaluating the spike propagation within the liquids, these smaller liquids can be run independently and in parallel. Since the evaluation time is a higher-order polynomial function of the number of neurons, computing a few smaller liquids in parallel instead of computing one large liquid is beneficial in terms of reducing the inference time.
Note that the evaluation of a large liquid can also be parallelized. The liquid dynamics vary temporally and, for digital simulations, can be divided into multiple time steps. Each evaluated neuron state in the liquid at one time step is temporally correlated with that of the next time step.
Therefore, the liquid evaluation process cannot be temporally divided for parallelizing the operation. Furthermore, since all the neurons are connected to each other (with a given sparsity), the dynamics of one neuron depend upon those of the other neurons connected to it. However, the matrix-vector manipulations involved in each time step can be parallelized. Simply put, in finding the pre-synaptic currents of the neurons, the matrix-vector multiplication between the spiking activity and the weight matrix must be evaluated, as shown below with respect to excitatory neurons as an example.
Consider dividing the above process among multiple processing cores. The division of the operation between two cores using row-wise striped matrix decomposition requires the matrix W to be divided into two parts (Figure 11A). For the next time step, the resulting S_1 and S_2 must be concatenated, which requires communication between cores. In contrast, a concatenation is not required until the end of the total simulation duration T in our ensemble approach (Figure 11B).
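The difference between the two decompositions comes down to the shape of the weight matrix: the ensemble's W is block-diagonal, so each core only ever needs its own slice of the spike vector. A minimal numerical sketch (sizes and weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # neurons in the single large liquid
W = rng.standard_normal((n, n))          # recurrent weight matrix (single liquid)
s = (rng.random(n) < 0.5).astype(float)  # spiking activity at the current step

# Single liquid, row-wise striped decomposition across two cores:
# each core holds half the rows of W but needs the FULL spike vector s,
# so S1 and S2 must be concatenated (communicated) before the next step.
S1 = W[: n // 2] @ s
S2 = W[n // 2 :] @ s
single = np.concatenate([S1, S2])        # equals W @ s

# Ensemble of two independent liquids: W is block-diagonal, so each core
# only needs its own half of s -- no inter-core communication per step.
W_ens = np.zeros_like(W)
W_ens[: n // 2, : n // 2] = W[: n // 2, : n // 2]
W_ens[n // 2 :, n // 2 :] = W[n // 2 :, n // 2 :]
out_a = W_ens[: n // 2, : n // 2] @ s[: n // 2]
out_b = W_ens[n // 2 :, n // 2 :] @ s[n // 2 :]

assert np.allclose(np.concatenate([out_a, out_b]), W_ens @ s)
print("block-diagonal evaluation needs no concatenation between steps")
```

Because each block's output depends only on its own block's input, the per-step concatenation required by the striped single-liquid case simply disappears.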
Due to the lack of communication overhead between processors, the ensemble approach is faster than a version of the single liquid baseline parallelized among N_ens processors. In fact, due to the aforementioned communication overheads, efficient parallel processing can be hindered even in graphics processing units (GPUs) (Kasap and van Opstal). However, in any method of evaluating the liquid dynamics, note that the ensemble approach has fewer connections than a single liquid baseline.
Therefore, the ensemble approach requires less computation, leading to lower evaluation time. Different studies have explored designing hardware accelerators for spiking neural network platforms (Wang et al.). (A) The division of the matrix-vector multiplication using row-wise striped matrix decomposition, for the single liquid baseline LSM. Note that during each time step, the generated S_1 and S_2 vectors need to be concatenated to form the S vector (representing the spiking activity of the liquid), which requires communication between cores.
The inference time is the sum of the liquid evaluation time and the classifier evaluation time. The liquid evaluation time was calculated by giving input instances to the LSM model solver and estimating the average liquid computation time per input. Note that the classifier training time is similar in the baseline single liquid LSM and the ensemble approach, since there are an equal number of neurons in the liquid and the number of trained weights is the same.
Once an LSM is trained, the connections within the liquid and the classifier weights must be stored. LSMs with large liquids require more space. In the ensemble approach, the number of connections within the liquid is significantly lower than in the single liquid baseline. For example, assume dividing a liquid with N_tot neurons into N_ens smaller liquids with N_tot/N_ens neurons in each of them.
This shows that the number of connections reduces by a factor of N_ens when dividing a large liquid into N_ens smaller liquids, given that the percentage connectivity stays the same. Figures 12A,B illustrate how the memory requirement varies for different numbers of liquids, for the MNIST image recognition and TI-alpha speech recognition applications, respectively. The simple structure and training of LSMs come with an accuracy trade-off when compared with other, non-reservoir computing techniques such as LSTM networks (Bellec et al.).
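The factor-of-N_ens reduction in connections follows directly from the quadratic dependence of connection count on liquid size, and can be checked with a quick sketch (the neuron count and connectivity below are illustrative, not values from the text):

```python
def n_connections(n_neurons, connectivity):
    """Expected number of recurrent connections at a given percentage connectivity."""
    return connectivity * n_neurons ** 2

def ensemble_connections(n_tot, n_ens, connectivity):
    """Total connections when the liquid is split into n_ens independent
    liquids of n_tot / n_ens neurons each (same percentage connectivity)."""
    return n_ens * n_connections(n_tot / n_ens, connectivity)

n_tot, p = 1000, 0.1
single = n_connections(n_tot, p)  # 0.1 * 1000^2 = 100,000 connections
for n_ens in (2, 4, 8):
    # Splitting reduces connections by exactly a factor of n_ens:
    # n_ens * p * (n_tot/n_ens)^2 = p * n_tot^2 / n_ens
    print(n_ens, ensemble_connections(n_tot, n_ens, p), single / n_ens)
```

Since stored weights scale with connection count, the memory requirement drops by the same factor, matching the trend described for Figures 12A,B.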
Different mechanisms have been studied in the literature, such as training the connections in the reservoir (Xue et al.). However, these methods increase the complexity of the LSM, resulting in poor performance with respect to latency despite the higher accuracy. Furthermore, a liquid can be considered a universal computational medium. A single liquid with multiple trained readouts can be used for multiple applications (Wang et al.). Methods such as training the connections within the liquid will restrict the LSM to one application.
In this section, we will explain two basic methods of improving accuracy while leaving the structural and training simplicity of LSMs intact, and compare the results with the ensemble approach. As explained in Maass et al., an increased number of neurons in the liquid results in an increased number of variables for the classifier. Therefore, increasing the number of neurons will improve the prediction accuracy of the LSM. Figure 13A shows how the accuracy of an LSM varies with the number of neurons in the reservoir for the TI-alpha speech recognition task.
As Figure 13A illustrates, the accuracy initially increases with the number of neurons and then saturates after a certain point. An increased number of neurons implies increased connections within the liquid, given that the percentage connectivity stays the same. Due to this, the evaluation time of the liquid increases exponentially, as shown in Figure 13B. Therefore, when the number of neurons is already high, the accuracy improvement obtained by further increasing the number of neurons is not worth the resulting performance and storage penalty.
Note that we have also adjusted the percentage connectivity at each point in the graph, to get the best accuracy for a given number of neurons. (A) The accuracy of an LSM with a single liquid, measured at different numbers of neurons, for a speech recognition application (TI-alpha). (B) The average liquid evaluation time of an LSM measured at different numbers of neurons. The percentage connectivity within the LSM is an important determinant of the spiking activity of a liquid. The spiking activity of the liquid can show two negative behaviors which can drastically reduce the accuracy of the network, viz. pathological synchrony and over-stratification.
Pathological synchrony occurs when the neurons get caught in infinite positive-feedback loops, resulting in heavy continuous spiking activity. Over-stratification can be defined as the opposite extreme: here, the neurons do not propagate an input signal properly, resulting in reduced spiking activity. Both of the above behaviors produce similar outcomes for input instances of different classes (hence poor separation between classes), making classification tasks hard. Illustration of two negative behaviors of an LSM at different input-to-liquid percentage connectivity values.
Each raster plot shows the spiking activity of the liquid neurons over time. The application is a speech recognition task (TI-alpha). (A) Over-stratification at low percentage connectivity. (B) Pathological synchrony at higher percentage connectivity. (C) An instance that shows clear differences between the spiking activity of the liquid neurons, in contrast to (A,B).
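As a hedged sketch of how these two pathological regimes might be flagged from a raster, one can threshold the mean firing probability. The thresholds and raster data below are illustrative assumptions, not values from the text:

```python
import numpy as np

def diagnose_liquid_activity(raster, low=0.01, high=0.5):
    """Classify a liquid's spiking regime from its raster.

    raster: boolean array (n_neurons, n_steps), True where a neuron spiked.
    low / high are illustrative thresholds on the mean firing probability:
    below `low` suggests over-stratification (inputs not propagated),
    above `high` suggests pathological synchrony (runaway positive feedback).
    """
    rate = raster.mean()  # fraction of (neuron, step) slots containing a spike
    if rate < low:
        return "over-stratification"
    if rate > high:
        return "pathological synchrony"
    return "healthy"

rng = np.random.default_rng(0)
quiet = rng.random((100, 200)) < 0.001  # almost silent liquid
busy = rng.random((100, 200)) < 0.9     # nearly every neuron fires every step
ok = rng.random((100, 200)) < 0.1       # moderate, input-driven activity
print(diagnose_liquid_activity(quiet),
      diagnose_liquid_activity(busy),
      diagnose_liquid_activity(ok))
```

A real diagnostic would also look at synchrony across neurons, not just the overall rate, but the rate alone already distinguishes the two extremes shown in the raster plots.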
After that, we refined the input-liquid connectivity for further accuracy improvement. Figure 15A shows how the accuracy changes with the percentage connectivity of the input-to-liquid connections. Liquids with different numbers of neurons have different optimum connectivity values, as shown in Figure 15B. The application is recognizing spoken letters in the TI-alpha speech corpus.
(A) The accuracy trend with varying input-liquid percentage connectivity, for different numbers of liquid neurons. (B) The percentage connectivity that gives the best accuracy at different numbers of neurons. The summarized results are illustrated in the 3D plot in Figure 16. The color code of the figure gives the accuracy of a particular combination of connectivity values. Sparser connectivity gives better results. As the figure illustrates, at sparser connectivity values, a single liquid LSM offers lower accuracy than an LSM with N_ens liquids (refer to the upper left corner of the 3D plots).
As Figure 17A illustrates, the maximum accuracy obtained from the single liquid configuration is smaller than that of the other configurations. We also plotted the average accuracy of a given LSM configuration across all percentage connectivity values (Figure 17B). The average accuracy can, to some extent, be thought of as the outcome one would witness for a given LSM configuration with an arbitrarily selected connectivity value within the specified sparse connectivity region of the experiment. The average accuracy of the single liquid LSM configuration is lower than that of multiple liquids.
In section 3. In order to see whether a single liquid with sparser connectivity offers better accuracy than an LSM with N_ens liquids and higher percentage connectivity, we conducted an experiment. In other words, the goal of the experiment is to compare the accuracy of two LSM configurations with the same number of connections.
The dominant component of the number of connections in an LSM is the connections between the excitatory neurons. Hence we conclude that, at higher degrees of sparsity, the ensemble approach performs better than a single liquid baseline with the same number of connections. Note that the number of connections in both cases considered is the same. The accuracy was evaluated on the TI-alpha classification task. Apart from the percentage connectivity, different connectivity patterns within the liquid have also been considered in the literature.
For example, a probabilistic local connectivity within the liquid, inspired by the connectivity of biological neurons, is suggested in Maass et al. We conducted an experiment with different sets of parameters (refer to the Supplementary Material) for the probabilistic local connectivity model. More information on our analysis is included in the Supplementary Material.
In this section, we analyze whether dividing a liquid with any number of neurons N_tot would result in similar accuracy improvements. To this effect, we created ensembles of liquids with different total numbers of neurons N_tot. As Figure 19 illustrates, liquids with a large number of neurons show clear signs of accuracy improvement when divided into smaller liquids. However, when the number of neurons is smaller, dividing the liquid may result in decreased accuracy.
For example, note that the accuracy reduces continuously when a small liquid is divided. This result is similar to the observation in Srinivasan et al.; here, likewise, the ensemble approach makes sense only for LSMs with a large number of neurons. In conclusion, we state the following with respect to the applicability of the ensemble approach for LSMs. In order to improve the accuracy of an LSM, the number of neurons can be increased. However, beyond a certain point, accuracy does not improve further.
In such a case, the ensemble approach can be utilized to further increase the accuracy. Such accuracy improvements are not attainable by means of other simple methods that preserve the structural and training simplicity of the standard LSM, such as changing the connectivity. Figure 19. The accuracy varying with the number of liquids in the ensemble approach, for different total numbers of neurons N_tot.
When moving from the single liquid approach to the ensemble of liquids approach, no benefit in terms of classifier training time was observed. This is due to the fact that the total number of liquid neurons is the same, and we are using a single classifier. In this section, we analyze whether including a readout at the end of each small liquid is more beneficial than having a single readout for all the liquids. The structure of the multiple liquid-multiple readout (MLMR) approach.
There are multiple small liquids, with an individual classifier at the end of each liquid. The final (global) output is calculated by taking the maximum vote among all the locally classified outputs from the s-LSMs (local outputs). Each s-LSM is trained individually, and the final classification is done by considering either the maximum outcome or the majority vote among all the local classifiers. During training, we do not use all the training data points for each local s-LSM classifier. Instead, we divide the training space among the ensemble of s-LSMs based on the following two criteria:
(1) random training space division (RD), and (2) clustered training space division (CD). In the random training space division (RD) method, we randomly divide the training data space among the ensemble of s-LSMs, and feed them to obtain the corresponding liquid state vectors at the output of each liquid. These state vectors are then used to train the local classifiers attached to each s-LSM in the ensemble, using gradient-descent error backpropagation.
For example, if there are N_ens s-LSMs and N_train examples in the training set, each s-LSM will be trained with N_train/N_ens randomly picked training examples. Here, an s-LSM has specific knowledge about the cluster of examples that it was trained with, and zero knowledge about other clusters. Therefore, an s-LSM may not correctly identify an input that belongs to a cluster different from the one it was trained with, leading to a large accuracy degradation at the global classifier.
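The RD split itself is a simple random partition of the training indices. A minimal sketch (the set sizes are illustrative):

```python
import numpy as np

def random_division(n_train, n_ens, rng=None):
    """Randomly divide the training set indices among n_ens s-LSMs (RD method).

    Each s-LSM receives roughly n_train / n_ens randomly picked examples,
    and the splits are disjoint and cover the whole training set.
    """
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(n_train)           # shuffle all example indices
    return np.array_split(perm, n_ens)        # cut the shuffle into n_ens chunks

splits = random_division(n_train=10000, n_ens=4, rng=np.random.default_rng(0))
print([len(s) for s in splits])  # [2500, 2500, 2500, 2500]
```

Each chunk of indices then selects the examples fed to one s-LSM, whose liquid states train that s-LSM's local classifier.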
Instead, it could be some different neuron from an s-LSM that was not trained with rotated images. Initially, we divide the training space into clusters along with their standard target vectors (vectors whose length equals the number of classes, L). This will force each s-LSM outcome to be low when the presented input does not belong to the cluster with which the s-LSM was trained. This method is explained graphically in Figure 21, by means of an example. (C) The clustered training space division method.
The target vectors of the foreign data are modified to have each value equal to 0. We used the handwritten digit recognition application with the extended MNIST data set to check the accuracy, performance, and training time of the aforementioned methods. The training data set was divided into 4 clusters: original MNIST images, noisy images, rotated images, and shifted images. The total number of neurons was 1, and each s-LSM has neurons. The connectivity is set as indicated in Table 1. Table 3 reports the accuracy of the two training space division methods explained above (RD and CD), along with the accuracy of the baseline single liquid with 1, neurons.
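The CD target construction can be sketched as follows, assuming (per the text) that foreign-cluster examples receive an all-zero target so the local classifier's outputs are suppressed for inputs outside its own cluster:

```python
import numpy as np

def cd_targets(labels, is_foreign, n_classes):
    """Build training targets for one s-LSM under the CD method.

    Examples from the s-LSM's own cluster get standard one-hot targets of
    length L = n_classes; examples drawn from foreign clusters get an
    all-zero target, forcing the local classifier's outputs low for inputs
    outside its cluster (the inhibition criterion described in the text).
    """
    targets = np.zeros((len(labels), n_classes))
    for i, (label, foreign) in enumerate(zip(labels, is_foreign)):
        if not foreign:
            targets[i, label] = 1.0  # one-hot target for own-cluster examples
    return targets

labels = np.array([3, 1, 7])
is_foreign = np.array([False, True, False])  # the middle example is foreign
t = cd_targets(labels, is_foreign, n_classes=10)
print(t.sum(axis=1))  # [1. 0. 1.]: the foreign example has an all-zero target
```

At inference time, an s-LSM trained this way produces low scores for foreign-cluster inputs, so the global maximum vote is dominated by the s-LSM that owns the input's cluster.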
When compared with the RD method, CD gives better accuracy for the same number of neurons. The reason for this can be explained as follows. The figure shows the t-Distributed Stochastic Neighbor Embedding (t-SNE) (Maaten and Hinton) of the high-dimensional images that belong to the aforementioned three clusters, for better visualization in the lower-dimensional (2D) space. Due to these separate distributions, examples that belong to the same class but to different clusters may not stay spatially together in the higher-dimensional space.
The N_ex,i examples a classifier in the RD method sees belong to N_ens clusters, and they are distributed all over, as shown in Figure 22B. According to the figure, the two classes are not linearly separable. Therefore, the RD method leads to more misclassifications, as elaborated in Figure 22A. Furthermore, it recognizes the data points that belong to foreign clusters due to the proposed inhibition criterion. The dashed lines show the classifier decision boundaries. We selected a percentage that picks an approximately equal number of data points from each of the aforementioned 13 sets.
The memory requirement, inference time, and training time were calculated for two scenarios. First, N_tot in the baseline was selected such that both the baseline and the RD method have the same accuracy. Second, N_tot in the baseline was selected such that it matches the accuracy of the CD method. In each of the above scenarios, the obtained memory requirement, inference time, and training time values were normalized with respect to the baseline. These normalized values for the two cases are shown in a single graph in Figure 23. The CD method is better in terms of memory requirement and inference time, in comparison to the single liquid baseline and the RD method.
We calculated the total number of MAC (multiply-and-accumulate) operations during training to estimate the training time; it is a function of the number of neurons in a liquid, the number of output neurons, and the number of training examples. The lowest training time was achieved by the RD method. For a total of 1, neurons, the 4-ensemble case with a single classifier was studied in section 3. Normalized total memory requirement, inference time, and training time of the clustered training space division (CD) method, the random training space division (RD) method, and the single liquid baseline.
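The MAC-based training-time estimate can be sketched as below. Every count here (neurons, classes, examples, and the forward-plus-backward cost model) is a hypothetical placeholder chosen only to show why dividing both the neurons and the training examples among local readouts shrinks the total:

```python
def training_macs(n_liquid, n_classes, n_train, n_epochs=1):
    """Rough MAC (multiply-and-accumulate) count for training a fully
    connected readout: forward and backward passes each cost about
    n_liquid * n_classes MACs per example."""
    return 2 * n_liquid * n_classes * n_train * n_epochs

n_tot, n_ens, n_classes, n_train = 1000, 4, 10, 60000

# MLSR: one classifier over all n_tot liquid neurons sees every example
mlsr = training_macs(n_tot, n_classes, n_train)

# MLMR with RD: each of the n_ens local classifiers reads n_tot/n_ens
# neurons and trains on only n_train/n_ens examples
mlmr = n_ens * training_macs(n_tot // n_ens, n_classes, n_train // n_ens)

print(mlsr // mlmr)  # 4: the local readouts need a factor of n_ens fewer MACs
```

Under this simple cost model, the MLMR readout training is cheaper by exactly the number of liquids, consistent with the RD method achieving the lowest training time.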
The results are under iso-accuracy conditions. We have presented an ensemble approach for Liquid State Machines (LSMs) that enhances the separation property relative to the approximation property, leading to accuracy improvements. The separation property in LSMs measures the dispersion between projected liquid states from different classes, whereas the approximation property indicates the concentration of the liquid states that belong to the same class.
We found that the DR increases when a large liquid is divided into multiple smaller independent liquids, across four speech and image recognition tasks. We observed the existence of an optimal number of liquids, N_ens,opt, up to which the DR increases, saturating thereafter. Owing to the improvement in the DR in our proposed ensemble approach, we noticed an LSM accuracy enhancement with an increasing number of liquids. The accuracy peaked at the same N_ens,opt point at which the DR saturated, for the different recognition tasks. This validates the existence of an optimal number of liquids which gives the best accuracy for the LSM; this point is highly dependent upon the application and the total number of liquid neurons.
There is a plethora of complex approaches that concentrate on improving the accuracy of LSMs, including learning the liquid connections (Wang and Li; Xue et al.). In contrast to such works, our proposed approach does not change the simple structure and training methods of LSMs. Furthermore, the ensemble approach gives better accuracy when compared with other simple mechanisms for improving LSM accuracy, such as increasing the number of neurons, changing the percentage connectivity, and utilizing probabilistic local connectivity models. Apart from providing improved accuracy, the proposed ensemble approach comes with other benefits, including a lower memory requirement and lower inference time.
We have shown that creating an ensemble of liquids leads to fewer interconnections in comparison to a single liquid with the same number of neurons. Furthermore, the liquid evaluation can potentially be parallelized in the ensemble approach due to the existence of small independent liquids. This results in reduced LSM inference time.
The accuracy improvement with increasing number of liquids in the ensemble becomes less evident when the total number of neurons is small.
In fact, creating an ensemble of liquids with a small number of neurons will instead reduce the accuracy. Hence the ensemble approach makes sense for LSMs with a large number of neurons (Srinivasan et al.). Since there is no benefit in terms of training time between a single-liquid LSM and the proposed ensemble approach (MLSR), we investigated the MLMR approach, where a classifier is added to each small liquid in the ensemble.
By dividing the training example space to train each small LSM, we were able to attain significant benefits in terms of training time when compared with the MLSR approach. The multiple classifiers in the MLMR approach were trained independently, and the final output is the maximum vote of all the local classifiers. Despite the performance benefits during training, we noticed an accuracy degradation in the MLMR approach when compared with both the MLSR approach and the single-liquid baseline LSM with an equal number of liquid neurons.
A large classifier, as in the baseline and MLSR approaches, has more parameters and is capable of fitting an unknown function better than a small classifier (Krzywinski and Altman), leading to improved accuracy. The data sets generated for this study are available on request to the corresponding author. PW performed the simulations. All the authors contributed to developing the concepts, generating the experiments, and writing the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The authors would like to thank C.