Brain connectivity meets reservoir computing

Fabrizio Damicelli, Claus C. Hilgetag, Alexandros Goulas (Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg; Department of Health Sciences, Boston University, Boston, Massachusetts)

Date: 2022-12

Abstract

The connectivity of Artificial Neural Networks (ANNs) differs from that observed in Biological Neural Networks (BNNs). Can the wiring of actual brains help improve ANN architectures? Can we learn from ANNs about what network features support computation in the brain when solving a task? At a meso/macro-scale level of connectivity, ANN architectures are carefully engineered, and such design decisions have had crucial importance in many recent performance improvements. On the other hand, BNNs exhibit complex emergent connectivity patterns at all scales. At the individual level, BNN connectivity results from brain development and plasticity processes, while at the species level, adaptive reconfigurations during evolution also play a major role in shaping connectivity. Ubiquitous features of brain connectivity have been identified in recent years, but their role in the brain's ability to perform concrete computations remains poorly understood. Computational neuroscience studies reveal the influence of specific brain connectivity features only on abstract dynamical properties, while the implications of real brain network topologies for machine learning or cognitive tasks have barely been explored. Here we present a cross-species study with a hybrid approach integrating real brain connectomes and Bio-Echo State Networks, which we use to solve concrete memory tasks, allowing us to probe the potential computational implications of real brain connectivity patterns for task solving. We find results consistent across species and tasks, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. We also present a framework, bio2art, to map and scale up real connectomes so that they can be integrated into recurrent ANNs. This approach also allows us to show the crucial importance of the diversity of interareal connectivity patterns, stressing the importance of the stochastic processes that determine neural network connectivity in general.

Author summary

Artificial Neural Networks (ANNs) and Biological Neural Networks (BNNs) exhibit different connectivity patterns. ANNs typically have carefully hand-crafted architectures that play an important role in their performance. On the other hand, BNN wiring shows self-organized, emergent patterns resulting from processes such as development and neuronal plasticity. Although ubiquitous properties of brain connectivity have been identified and associated with abstract dynamical properties of the brain, the implications of real brain network topologies for concrete machine learning tasks have barely been explored. The goal of this hybrid, cross-species study was to take a step in that direction by probing real brain connectomes on concrete machine learning tasks. Our approach integrates real brain connectomes and Bio-Echo State Networks, which we use to solve concrete memory tasks. To achieve that, we also present here a framework, bio2art, to map and scale up real connectomes so that they can be integrated into recurrent ANNs.
We find results consistent across species and tasks, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. Our findings stress the importance of stochasticity in neural network connectivity, especially regarding the heterogeneity of interareal connectivity.

Citation: Damicelli F, Hilgetag CC, Goulas A (2022) Brain connectivity meets reservoir computing. PLoS Comput Biol 18(11): e1010639. https://doi.org/10.1371/journal.pcbi.1010639

Editor: Marcus Kaiser, University of Nottingham, UNITED KINGDOM

Received: November 23, 2021; Accepted: October 5, 2022; Published: November 16, 2022

Copyright: © 2022 Damicelli et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All presented results with ESN training were obtained using the Python package echoes, publicly available here: https://github.com/fabridamicelli/echoes. The generation of scaled connectomes was done using the Python package bio2art, publicly available here: https://github.com/AlGoulas/bio2art.

Funding: FD has been funded by the Deutscher Akademischer Austausch Dienst (DAAD). AG has been funded by the Deutsche Forschungsgemeinschaft (DFG) (HI 1286/7-1). CCH has been funded by the Deutsche Forschungsgemeinschaft (DFG) (SFB 936/A1, Z3; TRR 169/A2; SPP 2041, HI 1286/6-1) and the Human Brain Project (SGA2, SGA3). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Recent breakthroughs in Artificial Neural Networks (ANNs) have prompted a renewed interest in the intersection between ANNs and Biological Neural Networks (BNNs). This interest follows two research avenues: improving the performance and explainability of ANNs, and understanding how real brains compute [1]. Many recent improvements of ANNs rely on novel network architectures, which play a fundamental role in task performance [2, 3]. In other words, such connectivity patterns allow for better representation of the outer world (i.e., the data) and/or they let the networks learn better, e.g., by promoting faster convergence. Also, although ANNs typically have a fixed architecture at a meso/macro level, at the lower level of weights and connections between layers they can develop complex connectivity patterns as a result of training [4]. Nevertheless, ANNs employ architectures that are not grounded in empirical insights from the network topology of real brains. For example, ANNs do not follow ubiquitous organization principles of BNNs and, although the measured density of connectomes depends on the experimental spatial resolution at hand, BNNs are in general sparser than ANNs [1, 5]. Given that BNNs present complex, non-random connectivity patterns, it is hypothesized that this "built-in" structure could be one key factor supporting their computational capabilities. In consequence, a focus on BNN topology has started to gain traction in recent ANN research [6, 7].
For instance, building feedforward networks based on graph generative models, such as the Watts-Strogatz and Barabási-Albert models, has resulted in performance competitive with optimized state-of-the-art architectures [8]. In a complementary vein, feedforward networks may spontaneously form non-random topologies during training, such as modular structure [9]. In addition, combining evolutionary algorithms with artificial neural networks has shown that a modular topology can improve performance and avoid forgetting when learning new tasks [10]. In sum, current evidence supports the notion that non-random topologies can lead to well-performing ANNs. However, studies thus far have only focused on network topology models that have almost no direct correspondence (or only an abstract one) to BNNs mapped by experimental connectomics. Hence, it is to date unknown if and to what extent the actual, empirically discerned topology of BNNs can lead to beneficial properties of ANNs, such as more efficient training (fewer epochs and/or samples) or better performance (e.g., higher test accuracy).

A complementary view comes from connectomics and network neuroscience, fueled by experimental advances for mapping brain connectivity to an unprecedented level of detail [11]. In that context, a connectome refers to all mapped connections of a brain, either coming from one individual or aggregated across sampled brains, depending on the experimental methodology. Graph-theoretical tools are then leveraged to describe brain connectivity and find potential associations, for example, correlations between specific graph properties and cognitive task performance [12]. Along those lines, some graph properties typical of real brains can also have advantageous dynamical properties, such as supporting the balance between information segregation and integration [13, 14]. Nevertheless, the relationship (if any at all) between those abstract dynamical properties and the performance of the network on concrete tasks remains unclear.

We explicitly address that gap here by building recurrent Echo State Networks (ESNs) that are bio-instantiated, thus BioESNs. We ask if and to what extent the topology of BioESNs affects their performance on concrete memory tasks. We build BioESNs that embody the wiring diagram empirically found in the brains of three primate species, including humans. We also present a framework, bio2art [15], to map and scale up real connectomes, allowing them to be integrated into recurrent ANNs. This is a necessary step toward exploring the possible links between biological and artificial neural systems, not by means of abstract network models but by exploiting the wealth of empirical data being generated, which has started to paint a detailed picture of the intricate wiring of biological neural networks.

Discussion

We address two fundamental questions aiming at bridging the gap between artificial and biological neural networks: Can actual brain connectivity guide the design of better ANN architectures? Can we better understand what network features support the performance of brains in specific tasks by experimenting with ANNs? Concretely, we investigate the potential effect of connectivity built based on real connectomes on the performance of artificial neural networks. To the best of our knowledge, this is the first cross-species study of this kind, comparing results from empirical connectomes of three primate species.
The gap that we aim to bridge emerges from two under-explored aspects of artificial and biological neural networks. First, connectivity patterns (i.e., architectures) of ANNs are very different from actual brain connectivity. For example, echo state networks use a sparse, randomly connected reservoir, which is incongruent with the highly non-random connectivity empirically found in the brain [5, 12]. Thus, it is not clear how more realistic architectures would impact the performance of such ANNs. Second, computational neuroscience studies have characterized the relation between structural and functional connectivity patterns [18, 19] and attempted to relate brain connectivity to behavioural differences [20, 21]. Nevertheless, it remains unclear how those patterns of neural activity translate into brain computational capabilities, i.e., how they support the performance of brain networks on concrete tasks. We set out to evaluate real whole-brain connectomes on specific tasks, in order to identify a potential role of such wiring patterns, in a similar vein to previous studies on feedforward networks [22].

We found that constraining the reservoir connectivity of ESNs with real connectomes led to performances as good as those of the random conditions classically used for ESNs, as long as a certain degree of randomness is allowed. In general, we observe a degeneracy of structure and function, in which different topologies lead to the same performance, so no unique connectivity pattern appears necessary to support optimal performance in this modeling context. Our results were similar across tasks. This is to a certain extent logical considering that both tested tasks are memory tasks, but the consistency also speaks for the robustness of the networks to different recall mechanisms. Importantly, all our results were consistent across the three evaluated species. This supports the generality of our findings, at least for the evaluated tasks. This observation is especially relevant considering that the connectomes were obtained with very different experimental methodologies [11, 23, 24]. Moreover, our experiments with scaled-up connectomes showed similar performance scores across species when the reservoir size was matched. Nevertheless, the different experimental methodologies used to infer the connectivity prevent us from drawing specific comparative conclusions across connectomes, such as whether the wiring diagram of any of the tested connectomes is intrinsically better suited for the task regardless of the size.

Our surrogate networks also showed that, in general terms, the more heterogeneity and randomness allowed in the connectivity, the better the performance the BioESNs achieved. Interestingly, that effect was also observable when augmenting the computational capacity of the models by means of larger reservoirs. Using the bio2art framework, we scaled up connectomes with either homogeneous or heterogeneous interareal distributions of connectivity weights and found that only the larger reservoirs with heterogeneous wiring could overcome the lower performance inherent to the underlying connectivity. This points once again to the importance of random wiring diagrams for ESN performance. The reason for the importance of such randomness is at this point not completely clear to us, but we can conjecture a role of the modular structure of brain connectivity.
A modular reservoir might imply higher correlations between otherwise more independent neurons, which in turn could hurt the representation capacity of the network, since the latent space of the input projection would have an effectively lower dimensionality. Our results are also in line with a recent study using human connectivity as the reservoir of ESNs, which showed that random connectivity indeed achieved globally maximal performances across almost all tested hyperparameters, provided the wiring cost is not considered [25]. It is worth noting that the interpretation of our results relies on the overall best-performing hyperparameters as an objective criterion for picking a hyperparameter constellation; future studies could go further down that research line, exploring the potentially different effects of the connectivity under the different dynamical regimes in the reservoir promoted by different hyperparameter constellations. The functional importance of randomness is also consistent with the fact that stochastic processes play a fundamental role in brain connectivity formation, both at the micro and at the meso/macro scale, as supported by empirical [26] and computational modeling studies [27, 28].

While here we tested the performance of the ANNs on two memory tasks, our approach is versatile and extendable, since it allows an open-ended examination of the consequences of network topologies found in nature for artificial systems. Specifically, we make the following contributions: First, we offer an approach for creating ANNs with network topology dictated directly by empirical observations of BNNs. Second, creating and upscaling BioESNs from real connectomes is in itself a highly non-trivial problem, and here we offer, although not exhaustively, insights into the consequences of each strategy. Third, our method allows building ANNs with network topologies based on empirical data from diverse biological species (mammalian brain networks).

We are aware of a number of limitations of our study as well as interesting research avenues for future work. We evaluated our BioESN models on two different memory tasks framed as regression problems. Even though these are classical tasks for ESNs [17], we should stress that our results might be different for other kinds of tasks or settings. Future work could, for example, include classification tasks as well as more ecologically realistic ones in order to derive more general conclusions. Also, the tasks used to evaluate network performance did not correspond one-to-one to cognitive tasks carried out by animals of the studied species. Although we regard that as an interesting research avenue once more detailed connectivity data becomes available, we opted for a somewhat simpler but rigorous framework, sticking close to classical approaches in the ESN field in order to avoid further assumptions, such as assumptions about the connectivity of not yet mapped regions of the brains of the studied species. Connections in the adult brain change constantly as a consequence of stochastic fluctuations and activity-driven plasticity, e.g., learning and memory [29]. In our study, we assumed the connectivity within the reservoir to be constant during the tasks. Previous studies have shown some effects of plasticity rules on ESNs [30], so we foresee interesting future work along those lines as well.
As we aimed at testing the potential impact of the global wiring diagram of connectomes, we considered each entire connectome as one single network to create the reservoirs. This is different from a previous study where the connectivity was divided into subnetworks corresponding to brain systems that were separately trained [25]. We decided to avoid the strong assumptions that such an approach implies, but we recognize potential for future studies in that direction, for example, exploring the division of networks into different input/output subsystems. In a similar vein, we limited our analysis to the global connectivity pattern. Future studies could attempt to systematically dissect the potential effect of different topological features on performance, an interesting but at the same time challenging research avenue, given the non-trivial statistical dependencies of graph metrics on each other [31].

Conclusion

The wiring of biological and artificial neural networks plays a crucial role in providing networks with fundamental built-in biases that influence their ability to learn and their performance. Brain connectivity results from emergent complex phenomena involving evolution, ontogenesis and plasticity, while artificial neural networks are deliberately hand-crafted. Our presented work represents a new interface between network neuroscience and artificial neural networks, precisely at the level of connectivity. We contribute an original approach to blend real brain connectivity and artificial networks, paving the way to future hybrid research, a promising exploration path leading to potentially better performance and robustness of artificial networks and a better understanding of brain computation.

Materials and methods

Echo State Networks (ESN)

Echo State Networks (ESNs) are one kind of recurrent neural network (RNN) belonging to the broader family of reservoir computing models, typically used to process temporal data [32]. The ESN model consists of an input, a reservoir and an output layer. The input layer feeds the input signal(s) into a recurrent neural network with fixed weights, i.e., the reservoir. The function of the reservoir is to non-linearly map the input signal onto a higher-dimensional space by means of the internal states of the reservoir. Formally, the input vector $x(t) \in \mathbb{R}^{N_x}$ is fed into the reservoir through an input matrix $W^{in} \in \mathbb{R}^{N_r \times N_x}$, where $N_r$ and $N_x$ indicate the number of reservoir and input neurons, respectively. Optionally, the input can be scaled by a factor $\epsilon$ (input scaling) before being fed into the network. The discrete dynamics of the leaky neurons in the reservoir are represented by the state vector $r(t) \in \mathbb{R}^{N_r}$ and governed by the following equations:

$\tilde{r}(t) = f\big(W^{in} x(t) + W r(t-1) + b\big)$ (1)

$r(t) = (1 - \alpha)\, r(t-1) + \alpha\, \tilde{r}(t)$ (2)

where $W \in \mathbb{R}^{N_r \times N_r}$ is the connectivity matrix between reservoir neurons, $b \in \mathbb{R}^{N_r}$ is the bias vector, and $f$ is the nonlinear activation function. For all the presented results $f = \tanh$, the hyperbolic tangent function, which bounds the values of $r$ to the interval $[-1, 1]$. With $\alpha = 1$ there is no leakage, which we found to perform better, so we fixed it for all presented results. Thus, Eqs 1 and 2 can be rewritten together as:

$r(t) = f\big(W^{in} x(t) + W r(t-1) + b\big)$ (3)

The output readout vector $y(t)$ is obtained as follows:

$y(t) = g\big(W^{out}\, [x(t); r(t)]\big)$ (4)

where $g$ is the output activation function, $[\,\cdot\,;\,\cdot\,]$ indicates vertical vector concatenation, and $W^{out}$ is the readout weights matrix. For all presented results, $g$ was either the rectified linear unit (ReLU) or the identity function. Training the model means finding the weights of $W^{out}$.
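To make Eqs 1 to 4 concrete, the following minimal numpy sketch runs a reservoir and fits a readout. It is purely illustrative: the network sizes and the toy delay task are our own choices, and it stands in for, rather than reproduces, the echoes implementation used for the actual experiments.

```python
import numpy as np

# Minimal sketch of the ESN of Eqs 3 and 4 (alpha = 1, f = tanh, g = identity).
rng = np.random.default_rng(seed=42)
n_reservoir, n_inputs, n_steps, transient = 100, 1, 1200, 100

# Fixed random weights: W_in ~ Uniform(-1, 1); the reservoir matrix W is
# rescaled to the target spectral radius (rho = 0.99 in the paper's fixed
# hyperparameter constellation).
W_in = rng.uniform(-1, 1, size=(n_reservoir, n_inputs))
W = rng.uniform(-1, 1, size=(n_reservoir, n_reservoir))
W *= 0.99 / np.max(np.abs(np.linalg.eigvals(W)))
b = np.ones(n_reservoir)  # bias b = 1

def harvest_states(X, input_scaling=1e-5):
    """Eq 3: r(t) = tanh(W_in x(t) + W r(t-1) + b), iterated over all steps."""
    r = np.zeros(n_reservoir)
    states = np.empty((len(X), n_reservoir))
    for t, x in enumerate(X * input_scaling):
        r = np.tanh(W_in @ x + W @ r + b)
        states[t] = r
    return states

# Toy task: recall the input delayed by 3 steps. The readout is obtained by
# linear regression with the pseudoinverse (W_out = Z+ Y, described next),
# after discarding the initial transient of 100 steps.
X = rng.uniform(-0.5, 0.5, size=(n_steps, n_inputs))
Y = np.vstack([np.zeros((3, n_inputs)), X[:-3]])   # Y(t) = X(t - 3)
Z = np.hstack([X, harvest_states(X)])[transient:]  # rows of Z are [x(t); r(t)]
W_out = np.linalg.pinv(Z) @ Y[transient:]
Y_pred = Z @ W_out                                 # Eq 4 with g = identity
```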
Linear regression was used to solve $W^{out} = Z^{+} Y$, where $Z^{+}$ is the pseudoinverse of $Z = [x(t); r(t)]$, i.e., the vertically concatenated inputs and reservoir states for all time steps. We initialized the incoming weights in $W^{in}$ with random uniformly distributed values in $[-1, 1]$. Further considerations about weight initialization as well as the sparsity of the reservoir are detailed in the section Mapping and upscaling connectomes with bio2art. The activity of the reservoir neurons is initialized with $r(0) = 0$. That produces an initial transient of spurious activity which is unrelated to the inputs and is therefore useless for learning the relationship to the outputs. We discarded that initial transient of 100 time steps in all cases, both for training and for testing. All presented results with ESN training were obtained using the Python package echoes, publicly available [33].

ESN hyperparameter tuning

The typically most influential hyperparameters in ESNs are the reservoir size $N_r$, the spectral radius $\rho$ of the reservoir, the input scaling factor $\epsilon$ and the leakage rate $\alpha$ [16]. In our scheme, the reservoir connectivity $W$ is determined by the real connectome, thus fixing $N_r$. So the hyperparameters explored were: spectral radius of the reservoir connectivity matrix $\rho = \{0.91, 0.93, \ldots, 0.99\}$, input scaling $\epsilon = \{10^{-9}, 10^{-8}, \ldots, 10^{0}\}$, leakage rate $\alpha = \{0.6, 0.8, 1\}$ and bias $b = \{0, 1\}$. A train/validation/test split of the data was performed. For each hyperparameter constellation, the model was trained on the training set and evaluated on the validation set. Based on the validation scores, and for the sake of comparison between different conditions, we fixed a common, not necessarily optimal but generally well-performing set of hyperparameters: spectral radius $\rho = 0.99$, input scaling $\epsilon = 10^{-5}$, leakage rate $\alpha = 1$, bias $b = 1$. Sticking to the overall best-performing hyperparameters thus serves as an objective criterion upon which we decide which hyperparameter constellation to fix in order to draw the conclusions of our results. Since the output values of the Sequence Recall Task are bounded to be greater than 0, we used ReLU as the output activation function. Given that such a bound does not exist for the outputs of the Memory Capacity Task, we simply used the identity as the output activation function. The data was split as follows: Sequence Recall Task: 800 trials for training and 200 for testing for each hyperparameter/task-difficulty/reservoir-generation constellation. Memory Capacity Task: 4000 time steps for training and 1000 for testing for each hyperparameter/task-difficulty/reservoir-generation constellation (see Supporting information). For each constellation, we tested 10 independent runs with newly instantiated networks. After fixing the best hyperparameters, newly instantiated networks were generated and evaluated on the test set not yet seen by any model. The results presented in the main text are the test performances.
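The following skeleton illustrates the grid search just described. The helper fit_and_score is a hypothetical placeholder (not part of echoes or bio2art) standing in for building, training, and validating one BioESN with a given constellation.

```python
from itertools import product

import numpy as np

def fit_and_score(rho, eps, alpha, b, seed):
    """Hypothetical stand-in: build, train, and validate one BioESN.
    Replace with the actual pipeline; here it returns a placeholder score."""
    return np.random.default_rng(seed).random()

# The grid reported above: rho, epsilon, alpha, and bias.
grid = {
    "spectral_radius": [0.91, 0.93, 0.95, 0.97, 0.99],
    "input_scaling": [10.0**e for e in range(-9, 1)],  # 1e-9, ..., 1e0
    "leakage_rate": [0.6, 0.8, 1.0],
    "bias": [0, 1],
}

results = []
for rho, eps, alpha, b in product(*grid.values()):
    # 10 independent runs with newly instantiated reservoirs per constellation.
    scores = [fit_and_score(rho, eps, alpha, b, seed=run) for run in range(10)]
    results.append(((rho, eps, alpha, b), np.mean(scores)))

best_constellation, best_score = max(results, key=lambda item: item[1])
```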
Mapping and upscaling connectomes with bio2art

We refer to a connectome as the map of all the connections obtained from a single brain [34]. We used the following publicly available datasets: Macaque monkey [23], Marmoset monkey [24] and Human [11]. For the sake of clarity, let us dissect the connectivity into two components: topology and weights. The topology refers to the wiring diagram (i.e., who connects to whom), regardless of the strength of the connections (assuming non-binary connectivity). In other words, if we think of the connectivity in terms of its representation as a connectivity matrix, the topology refers here to the binary mask that indicates which positions of the matrix have values different from zero. The weights describe the precise strengths of those connections between neurons. This differentiation is not necessarily completely consistent with common uses in the literature, but serves the purpose of explaining the work presented here.

As our goal is to evaluate the role of the topology of real brains, i.e., the mentioned wiring diagram, we propose a scheme to map real connectomes onto reservoirs of ESNs, with topology corresponding to real brains but weights drawn from a uniform distribution of values in $[-1, 1]$, as in classical ESN approaches [32]. Classical ESN reservoirs have weights drawn at random from a symmetric probability distribution, typically uniform or Gaussian, and placed at random between neurons, thus generating a random graph from the perspective of the topology as well. Another common practice is to use a relatively sparse network (e.g., common choices are a pairwise connection probability p < 0.1 or a low fixed mean degree such as k = 10). So, for the sake of comparison and to test the effect of topology on the performance of ESNs, we used the following surrogate connectivity variations as null models (see Fig 2 for a visual comparison, and the sketch after this list):

Bio (rank): Preserves the empirical topology, i.e., the wiring diagram or "who connects to whom". Weights are placed such that their rank order matches the empirical one, i.e., strong weights in the empirical connectome correspond to higher positive weights in the Bio (rank) condition, and vice versa.

Bio (no-rank): Preserves the empirical topology. Weights are placed randomly, so no rank order is preserved.

Random (density): The wiring diagram is completely random, but only as many connections are allowed as needed to match the connection density of the empirical connectome. The density is defined as the fraction of present connections out of all possible ones. Weights are placed randomly.

Random (k): The wiring diagram is completely random, but only a fixed number of connections k per neuron is allowed. All presented experiments use k = 10. Weights are placed randomly.

Random (full): The wiring diagram is completely random and all neurons connect to all other neurons, i.e., the density of connections is 1. Weights are placed randomly.
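As an illustration of the logic behind these null models (a sketch, not the actual bio2art implementation), the following functions generate each surrogate reservoir matrix from an empirical connectome C, represented as a non-negative weight matrix:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bio_rank(C):
    """Empirical topology; random weights re-ordered so their rank order
    matches the empirical one (strongest edge gets the largest weight)."""
    W = np.zeros_like(C, dtype=float)
    idx = np.flatnonzero(C)
    w = np.sort(rng.uniform(-1, 1, size=idx.size))  # ascending random weights
    order = np.argsort(C.flat[idx])                 # ascending empirical edges
    W.flat[idx[order]] = w
    return W

def bio_no_rank(C):
    """Empirical topology; weights placed at random (no rank preserved)."""
    W = np.zeros_like(C, dtype=float)
    idx = np.flatnonzero(C)
    W.flat[idx] = rng.uniform(-1, 1, size=idx.size)
    return W

def random_density(C):
    """Random topology matching the empirical connection density."""
    density = np.count_nonzero(C) / C.size
    mask = rng.random(C.shape) < density
    return np.where(mask, rng.uniform(-1, 1, size=C.shape), 0.0)

def random_k(C, k=10):
    """Random topology with a fixed number of connections k per neuron."""
    n = C.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        targets = rng.choice(n, size=k, replace=False)
        W[i, targets] = rng.uniform(-1, 1, size=k)
    return W

def random_full(C):
    """Fully connected random reservoir (density = 1)."""
    return rng.uniform(-1, 1, size=C.shape)
```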
The bio2art functionality builds artificial recurrent neural networks by using the topology dictated by empirical neural networks and by extrapolating from the empirical data to scale up the artificial neural networks. We explored here a range of network size scaling factors from 1x to 30x, in steps of 1. bio2art also offers the possibility to control the within-area and between-area connectivity. There are currently no comprehensive empirical data for neuron-to-neuron connectivity within each brain region. However, existing empirical data suggest that within-region connectivity strength constitutes approximately 80% of the extrinsic between-region connectivity strength [23]. Therefore, the intrinsic, within-region connectivity in our work followed this rule. It should be noted that the number of connections that a neuron can form with neurons of the same region is controlled by a parameter dictating the percentage of connections that a neuron will form, out of the total number of connections that could be formed. Here we set this parameter to 1, that is, all connections between neurons within a region are formed. The exact details of the implementation can be found in [15], together with a freely available Python toolbox to apply the tools used here.

Tasks

Memory Capacity (MC) Task. In this memory paradigm, a random input sequence of numbers $X(t)$ is presented to the network through an input neuron. The network is supposed to independently learn delayed versions of the input, thus there are several outputs [17]. Each output $Y_\tau$ predicts a version of the input $X(t)$ delayed by $\tau$ time steps, i.e., $Y_\tau(t) = X(t - \tau)$. The values of the input signal were randomly drawn from a uniform distribution, i.e., $X(t) \sim \mathrm{Uniform}(-0.5, 0.5)$. The networks were trained with 4000 time steps and tested on the subsequent 1000. Each output is trained independently and the performance, the so-called Memory Capacity (MC), is calculated as the cumulative score (squared Pearson correlation coefficient $\rho$) across all outputs (i.e., all time lags) as follows:

$MC = \sum_{\tau} \rho^2\big(X(t - \tau),\, Y_\tau(t)\big)$ (5)

Sequence Recall Task. In this task the network is presented with two inputs, $X_1(t)$ and $X_2(t)$: a sequence of random numbers to memorize and a cue input, respectively. The cue input signals whether to fixate (output equal to zero) or to recall. After the recall signal, the network is supposed to output the sequence memorized in the L steps previous to the recall signal, where the pattern length L is a parameter regulating the task difficulty (see Fig 6). One trial of the task consists of one fixation period and the subsequent recall period. The values of the input signal $X_1(t)$ were randomly drawn from a uniform distribution, i.e., $X_1(t) \sim \mathrm{Uniform}(0, 1)$. The performance was evaluated with the R2 score only during the recall steps, because the fixation phase was much easier for the model to get right and would have inflated the performance. Each BioESN was trained with 800 trials and tested on 200 trials. For each pattern length L in {5, 6, 7, …, 25}, 100 different networks with newly instantiated weights were tested.
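As a sketch of the MC task setup and score (Eq 5): the helper train_and_predict below is a hypothetical stand-in for the readout regression described in the ESN section, and the maximum lag is an assumed value, chosen only for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=1)
n_train, n_test, max_lag = 4000, 1000, 20  # max_lag is an assumption
X = rng.uniform(-0.5, 0.5, size=n_train + n_test)

def delayed_target(X, tau):
    """Y_tau(t) = X(t - tau), zero-padded at the start of the sequence."""
    Y = np.zeros_like(X)
    Y[tau:] = X[:-tau]
    return Y

def train_and_predict(X, tau):
    """Hypothetical stand-in: train one output on the first n_train steps
    and return predictions for all steps (placeholder: target plus noise)."""
    return delayed_target(X, tau) + rng.normal(0, 0.1, size=X.shape)

# Eq 5: cumulative squared Pearson correlation across all time lags,
# evaluated on the held-out test portion of the sequence.
mc = sum(
    pearsonr(delayed_target(X, tau)[n_train:],
             train_and_predict(X, tau)[n_train:])[0] ** 2
    for tau in range(1, max_lag + 1)
)
print(f"Memory capacity (toy example, upper bound {max_lag}): {mc:.2f}")
```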
Fig 6. Cognitive tasks. Schematic representation of the tasks and the input/output data structure for each of the cognitive tasks used to evaluate the performance of the BioESNs. Left: Memory Capacity (MC) task, where the network receives a stream of random values as a single input X and has several independent outputs Y (for simplicity, the example shows only two). Each output corresponds to an independent output neuron of the network and is supposed to recall the input at a specific time lag τ. The BioESNs were trained with 4000 time steps and tested on the subsequent 1000. Right: One trial of the Sequence Recall task. The network receives inputs X1 and X2, coming from a random sequence and a recall signal channel, respectively. There is only one output neuron which, after the recall signal channel indicates it (i.e., X2 = 1), is supposed to reproduce the input received in the previous L steps, where the pattern length L is the parameter determining the difficulty of the task (for simplicity, the scheme shows L = 2). The BioESNs were trained with 800 trials and tested on 200 trials. The score was computed considering only the recall phase in order to avoid inflation of the metric, given that the fixation periods were much easier to perform correctly. https://doi.org/10.1371/journal.pcbi.1010639.g006

Supporting information

S1 Fig. Memory Capacity task. Echo state network hyperparameter grid search. Homogeneous interareal connectivity. Macaque connectome. Results of the grid search over the input scaling, leakage rate and spectral radius hyperparameters. For each parameter constellation, the boxplots show the aggregated validation scores of ten independently generated and trained reservoirs. https://doi.org/10.1371/journal.pcbi.1010639.s001 (EPS)

S2 Fig. Memory Capacity task. Echo state network hyperparameter grid search. Heterogeneous interareal connectivity. Macaque connectome. Results of the grid search over the input scaling, leakage rate and spectral radius hyperparameters. For each parameter constellation, the boxplots show the aggregated validation scores of ten independently generated and trained reservoirs. https://doi.org/10.1371/journal.pcbi.1010639.s002 (EPS)

S3 Fig. Memory Capacity task. Echo state network hyperparameter grid search. Homogeneous interareal connectivity. Marmoset connectome. Results of the grid search over the input scaling, leakage rate and spectral radius hyperparameters. For each parameter constellation, the boxplots show the aggregated validation scores of ten independently generated and trained reservoirs. https://doi.org/10.1371/journal.pcbi.1010639.s003 (EPS)

S4 Fig. Memory Capacity task. Echo state network hyperparameter grid search. Heterogeneous interareal connectivity. Marmoset connectome. Results of the grid search over the input scaling, leakage rate and spectral radius hyperparameters. For each parameter constellation, the boxplots show the aggregated validation scores of ten independently generated and trained reservoirs. https://doi.org/10.1371/journal.pcbi.1010639.s004 (EPS)

S5 Fig. Memory Capacity task. Echo state network hyperparameter grid search. Homogeneous interareal connectivity. Human connectome. Results of the grid search over the input scaling, leakage rate and spectral radius hyperparameters. For each parameter constellation, the boxplots show the aggregated validation scores of ten independently generated and trained reservoirs. https://doi.org/10.1371/journal.pcbi.1010639.s005 (EPS)

S6 Fig. Memory Capacity task. Echo state network hyperparameter grid search. Heterogeneous interareal connectivity. Human connectome. Results of the grid search over the input scaling, leakage rate and spectral radius hyperparameters. For each parameter constellation, the boxplots show the aggregated validation scores of ten independently generated and trained reservoirs. https://doi.org/10.1371/journal.pcbi.1010639.s006 (EPS)

Acknowledgments
We thank Dr. Fatemeh Hadaeghi and Dr. Dong Li for fruitful discussions about ESNs and constructive feedback on the manuscript.