A Novel Multiple Objective Optimization Framework for Constraining Conductance-Based Neuron Models by Experimental Data

Abstract

We present a novel framework for automatically constraining parameters of compartmental models of neurons, given a large set of experimentally measured responses of these neurons. In experiments, intrinsic noise gives rise to a large variability (e.g., in firing pattern) in the voltage responses to repetitions of the exact same input. Thus, the common approach of fitting models by attempting to perfectly replicate, point by point, a single chosen trace out of the spectrum of variable responses does not seem to do justice to the data. In addition, finding a single error function that faithfully characterizes the distance between two spiking traces is not a trivial pursuit. To address these issues, one can adopt a multiple objective optimization approach that allows the use of several error functions jointly. When more than one error function is available, the comparison between experimental voltage traces and model response can be performed on the basis of individual features of interest (e.g., spike rate, spike width). Each feature can be compared between model and experimental mean, in units of its experimental variability, thereby incorporating this variability into the fitting. We demonstrate the success of this approach, when used in conjunction with genetic algorithm optimization, in generating an excellent fit between model behavior and the firing pattern of two distinct electrical classes of cortical interneurons, accommodating and fast-spiking. We argue that the multiple, diverse models generated by this method could serve as the building blocks for the realistic simulation of large neuronal networks.

Introduction

Conductance-based compartmental models are increasingly used in the simulation of neuronal circuits (Brette et al., 2007; Herz et al., 2006; Traub et al., 2005). The main challenge in constructing such models that capture the firing pattern of neurons is constraining the density of the various membrane ion channels that play a major role in determining these firing patterns (Bekkers, 2000a; Hille, 2001). Presently, the lack of quantitative data implies that the density of a certain ion channel in a specific dendritic region is by and large a free parameter. Indeed, constraining these densities experimentally is not a trivial task, to say the least. The development of molecular biology techniques (MacLean et al., 2003; Schulz et al., 2006, 2007; Toledo-Rodriguez et al., 2004), in combination with dynamic-clamp recordings (Prinz et al., 2004a), may eventually allow some of these parameters to be constrained experimentally. Yet to date, the dominant method is to record the in vitro experimental response of the cell to a set of simple current stimuli and then attempt to replicate the response in a detailed compartmental model of that cell (De Schutter and Bower, 1994; London and Hausser, 2005; Koch and Segev, 1998; Mainen et al., 1995; Rapp et al., 1996). Traditionally, by a process of educated guesswork and intuition, a set of values for the parameters describing the different ion channels that may exist in the neuron membrane is suggested and the model performance is compared to the actual experimental data. This process is repeated until a satisfactory match between the model and the experiment is attained.
As computers become more powerful and clusters of processors increasingly common, the computational resources available to a modeler steadily increase. Thus, the possibility of harnessing these resources for the task of constraining parameters of conductance-based compartmental models is very attractive. However, the crux of the matter is that now the evaluation of the quality of a simulation is left to an algorithm. The highly sophisticated comparison between a model's performance and experimental trace(s) that the trained modeler performs by eye must be reduced to some formula. Previous studies have explored the feasibility of constraining detailed compartmental models using automated methods of various kinds (Achard and De Schutter, 2006; Keren et al., 2005; Vanier and Bower, 1999). These studies mostly focused on fitting parameters of a compartmental model to data generated by the very same model given a specific value for its parameters (but see Shen et al., 1999). As the models that generated the target data contained no intrinsic variability, the comparison between simulation and target data was done on a direct trace-to-trace basis. In experiments, however, when the exact same stimulus is repeated several times, the voltage traces elicited differ among themselves to a significant degree (Mainen and Sejnowski, 1995; Nowak et al., 1997). Since the target data traces themselves are variable and selecting but one of the traces must to some extent be arbitrary, a direct comparison between single traces might not serve as the best method of comparison between experiment and model. Indeed, this intrinsic variability ("noise") may have an important functional role (Schneidman et al., 1998). Therefore, we propose extracting certain features of the voltage response to a stimulus (such as the number of spikes or the first spike latency), along with their intrinsic variability, rather than using the voltage trace itself directly. As demonstrated in the present study, these features can then be used as the basis of the comparison between model and experiment. Using a very different technique, yet in a similar spirit, Prinz et al. (2003) segregated the behavior of a large set of models generated by laying out a grid in parameter space into four main categories of electrical activity, as observed across many experiments in lobster stomatogastric neurons (see Goldman et al., 2001; Prinz et al., 2004b; Taylor et al., 2006). We utilize an optimization method named multiple objective optimization (or MOO; Cohon, 1985; Hwang and Masud, 1979) that allows several error functions, corresponding to several features of the voltage response, to be employed jointly and searches for the optimal trade-offs between them. Using this optimization technique, feature-based comparisons can be employed to arrive at a model that captures the mean of the experimental responses in a fashion that accounts for their intrinsic variability. We exemplify the use of this technique by applying it to the concrete task of modeling the firing pattern of two electrical classes of inhibitory neocortical interneurons, the fast-spiking and the accommodating, as recorded in vitro by Markram et al. (2004). We demonstrate that this novel approach yields an excellent match between model and experiments and argue that the multiple, diverse models generated by this method for each neuron class (incorporating the inherent variability of neurons) could serve successfully as the building blocks for large network simulations.
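To make the feature-based comparison described above concrete, the following minimal sketch (our illustration, not the authors' implementation; the feature names and numbers are hypothetical) scores a model response against the experimental mean and SD of each feature, in units of that SD:

    def feature_errors(model_features, exp_mean, exp_sd):
        """For each feature, distance between model value and experimental mean,
        expressed in units of the experimental standard deviation."""
        return {name: abs(model_features[name] - exp_mean[name]) / exp_sd[name]
                for name in exp_mean}

    # Hypothetical values for a single depolarizing step current:
    exp_mean = {'spike_rate': 7.0, 'first_spike_latency': 35.0}   # Hz, ms
    exp_sd = {'spike_rate': 1.0, 'first_spike_latency': 5.0}
    model = {'spike_rate': 6.5, 'first_spike_latency': 42.0}

    errors = feature_errors(model, exp_mean, exp_sd)
    # {'spike_rate': 0.5, 'first_spike_latency': 1.4} -- each error is in SDs

A model whose errors are all below a chosen threshold (e.g., 2 SD, as used later in this study) would then be considered to reproduce that stimulus within the experimental variability.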
Materials and Methods

Every fitting attempt between model performance and experimental data consists of three basic elements: a target data set (and the stimuli that generated it), a model with its free parameters (and their range), and the search method. The result of the fitting procedure is a solution (or sometimes a set of solutions) of varying quality, as quantified by an error (or distance) between the model performance and the target experimental data.

Search algorithm

Examples of search algorithms include simulated annealing (Kirkpatrick et al., 1983), evolutionary strategies algorithms (Mitchell, 1998), conjugate-gradient (Press, 2002) and others. Examples of error functions include mean square error, trajectory density (LeMasson, 2001), spike train metrics (Victor and Purpura, 1996), and more. (These two elements are by and large independent of one another, i.e., almost any error function can be used by almost any search algorithm; thus, we address the two issues separately.) In this study, we chose to use evolutionary algorithms. This class of algorithms was shown by Vanier and Bower (1999) to be an effective method for constraining conductance-based compartmental models. Our choice was motivated by the nature of these search algorithms, which explore many solutions simultaneously and are naturally compatible with use on parallel computers. Briefly, an evolutionary algorithm is an iterative optimization algorithm that derives its inspiration from abstracted notions of fitness improvement through biological evolution. In each iteration (generation), the algorithm calculates the value of the target function (fitness) for numerous solutions (organisms). The set of all solutions (the population) is then considered. The best solutions are selected to pass over (breed) and be used in the next iteration. Solutions are not transferred intact from iteration to iteration but rather randomly changed (mutated) in various fashions. This process of evaluation, selection, and new solution generation is continued until a certain criterion for the quality of fitness between model performance and experimental results is fulfilled or the allotted iteration number has been reached. Among the many variants of such algorithms, we decided to use a custom-made version of the elitist non-dominated sorting genetic algorithm (NSGA-II) (Deb et al., 2002) that we implemented in NEURON (Carnevale and Hines, 2005). We use real-value parameters. The mutation we used was a time-diminishing non-uniform mutation (Michalewicz, 1992). Namely, the mutation changes the value of the current parameter by an amount within a range that diminishes with time (subject to parameter boundaries). The crossover scheme implemented is named simulated binary crossover (SBX) (Deb and Agrawal, 1995) and aims at replicating the effect of the standard crossover operation in a binary genetic algorithm. Thus, an offspring will have different parameter values taken from each of the two progenitors and some might be slightly modified. Lastly, we introduced a sharing function (Goldberg and Richardson, 1987) to encourage population diversity. This function degrades the fitness of each solution according to the number of solutions within a predefined distance. Thus, it improves the chances of a slightly less fit, yet distinct, solution to survive and propagate.
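As a rough illustration of the two variation operators just described, the sketch below shows one common form of the time-diminishing non-uniform mutation and of SBX for a single real-valued parameter. The exponent b and the distribution index eta are illustrative choices on our part, not values reported in the study:

    import random

    def nonuniform_mutation(x, lo, hi, gen, max_gen, b=5.0):
        """Perturb x within [lo, hi]; the perturbation range shrinks as gen -> max_gen."""
        r = random.random()
        shrink = (1.0 - gen / max_gen) ** b
        if random.random() < 0.5:
            return x + (hi - x) * (1.0 - r ** shrink)
        else:
            return x - (x - lo) * (1.0 - r ** shrink)

    def sbx_crossover(p1, p2, eta=15.0):
        """Simulated binary crossover: two children spread around the two parents."""
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
        return c1, c2

Early in the run the mutation can move a conductance anywhere within its allowed range; toward the final generations it only fine-tunes, which is the behavior described above.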
Our tests of different mutation and crossover operators show that these indeed affect the speed of progression toward good solutions, but for most forms of operators the fitting eventually converged to similar degrees of success.

Error functions

In this study, rather than suggesting a single optimal error function (which might not even be possible to define in the most general case), we adopt a strategy that allows several potentially conflicting error functions to be used jointly without being forced to assign a relative weight to each one. This method is termed multiple objective optimization (Cohon, 1985; Deb, 2001; Hwang and Masud, 1979). It arose naturally in engineering, where one would like to design, for instance, a steel beam that is both strong (one objective) and light (a second objective). These two objectives potentially clash and are difficult to weigh a priori without knowing the precise trade-off. In brief, an optimization problem is defined as a MOO problem (MOOP) if more than one error function is used and one considers them in parallel, not by simply summing them. The main difference between the single objective optimization that has been previously used (Achard and De Schutter, 2006; Keren et al., 2005; Vanier and Bower, 1999) and MOO is in the possible relations between two solutions. In a single objective problem, a solution can be either better or worse than another, depending on whether its error value is lower or higher. This is not the case in multiple objective problems. The relation of better or worse is replaced by that of domination. One solution dominates another if it does better than the other solution in at least one objective and no worse than the other solution in all other objectives. If there are M objective functions f_j(x), j = 1 … M, then a solution x1 is said to dominate a solution x2 if both of the following conditions hold: (1) f_j(x1) ≤ f_j(x2) for all j = 1 … M, and (2) f_k(x1) < f_k(x2) for at least one k ∈ {1, …, M}.

In contrast, the Pareto front in Figure 4B (accommodation index vs. spike rate) is not parallel to the axes. Thus, some of the points along its perimeter will have lower values of one feature but higher values of the other (e.g., the points marked by the three arrows). As the minimal value of one feature can be achieved only by accepting a value of the other objective higher than its minimum value, some trade-off exists between these two features. Therefore, different decisions on the relative importance of the two features will result in different points on the Pareto front being considered as the most desired model. Accordingly, if one wishes to sum the two errors and sort the solutions on the Pareto front by the value of this sum, different weightings of the two objectives will result in different models being considered as the minimum of the sum. The black arrow in Figure 4B marks the point considered to be the minimum under an equal weighting. Alternatively, putting more emphasis on the error in spike rate (x-axis) would select a point such as that marked by the light green arrow (top-left). Conversely, weighing the error in accommodation (y-axis) more heavily will favor a point such as that marked by the dark green arrow (bottom-right). Figures 4C and 4D serve to demonstrate the effect of selecting different preferences regarding the two features.
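As a hedged illustration of the domination relation defined in the Methods and of the weighted-sum selection just described (the function names and the error vectors below are hypothetical, with errors in units of SD):

    def dominates(e1, e2):
        """True if error vector e1 dominates e2: no worse in every
        objective and strictly better in at least one."""
        return (all(a <= b for a, b in zip(e1, e2))
                and any(a < b for a, b in zip(e1, e2)))

    def weighted_pick(front, weights):
        """Point on a Pareto front that minimizes a weighted sum of its errors."""
        return min(front, key=lambda e: sum(w * x for w, x in zip(weights, e)))

    # Hypothetical front over (spike-rate error, accommodation error):
    front = [(0.2, 1.8), (0.6, 1.0), (1.4, 0.4)]
    dominates(front[0], front[1])      # False: the points are mutually non-dominating
    weighted_pick(front, (1.0, 1.0))   # equal weighting        -> (0.6, 1.0)
    weighted_pick(front, (3.0, 1.0))   # emphasize spike rate   -> (0.2, 1.8)
    weighted_pick(front, (1.0, 3.0))   # emphasize accommodation -> (1.4, 0.4)

Since none of the three points dominates another, which one is "best" depends entirely on the weights, which is precisely the situation illustrated by the three arrows in Figure 4B.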
Figure 4C shows the response of the model that corresponds to the error values marked by a light green arrow in Figure 4B (upper-left) and Figure 4D shows the model response corresponding to the error marked by the dark green arrow (lower-right), both to a 150 pA, 2-second depolarizing step current. As can be seen, spike accommodation of the light green model trace (Figure 4C) is less pronounced than that of the dark green model trace (Figure 4D). This reflects the fact that the error value for accommodation of the dark green model is smaller than that of the light green model. On the other hand, in the light green model, more weight was given to the spike rate feature. Indeed, the light green trace has 14 APs (exactly corresponding to the experimental mean rate of seven spikes per second) while the dark green trace has only 13 APs. Note that variability similar to that depicted in these two model-generated green traces may be found in the experimental traces for the same cell and the same depolarizing step. The light green model trace is more reminiscent of the first experimental trace of the accommodating cell (Figure 1A, upper-left) whereas the dark green trace is more similar to the second (Figure 1A, upper-right). Figure 5 portrays the spread of model parameter values at the end of the fitting for both electrical classes (red – accommodating; blue – fast-spiking). Each of the 300 parameter sets (each set composed of 12 parameters – the maximal conductances of the different ion channels) at the final iteration of a fitting that passes a quality criterion is represented as a circle. The criterion in this case was an error of less than 2 SD in each feature. The circle is plotted at a point corresponding to the value of the selected channel conductance normalized by the range allowed for that conductance (see Methods). Since there are 12 parameters, it is difficult to visualize their location in the full 12-dimensional space. Thus, for illustration, we project the parameter values of all models onto a three-dimensional subspace (Figures 5A and 5B) and onto a one-dimensional space for each of the 12 parameters (Figure 5C). Note that many solutions overlap. Hence, the number of circles (for each class of firing types) might seem to be smaller than 300. In Figure 5D, a subset of only seven solutions depicted in Figure 5C is displayed, with different colored lines connecting the parameter values of each acceptable solution.

Figure 5 Parameter values of acceptable solutions. A–B. Each of the 300 parameter sets at the final iteration of a fitting attempt that has an error of less than 2 SD in each objective is represented as a circle. The circle is plotted at the point corresponding to the normalized value of the selected channel conductance. Red circles represent models of the accommodating neuron and blue circles represent models of the fast-spiking neuron. Plotted are the results of two out of the three repetitions of the fitting attempt shown in Figure 3. (A) The channels selected were: Nat, Nap, Kv3.1. (B) The channels selected were: Leaksoma, Im, IA. (C) As in A and B, but here each of the parameter sets for each of the 300 acceptable solutions at the 1000th generation is depicted as a circle on a one-dimensional plot. The circle is plotted at the point corresponding to the normalized value of the channel conductance. Note that even when projected onto a single dimension the two electrical behaviors occupy separate regions for some of the channels.
(D) A subset of seven of the parameter sets of the accommodating behavior displayed in red in B is depicted. Lines connect each set of parameters that corresponds to an acceptable solution. Thus, each individual line represents the full parameter set of a single model.

A few observations can be made considering Figures 5A–C. First, there are many combinations of parameters that give rise to acceptable solutions (i.e., non-uniqueness; see Golowasch et al., 2002; Keren et al., 2005; Prinz et al., 2003, and see below). Second, confinement of the parameters for the two different modeled classes (red vs. blue) to segregated regions of the parameter space can be seen both in the three-dimensional space (Figure 5A) and even in some of the one-dimensional projections (for instance, Nat in Figure 5C), while for other subspaces, both in three dimensions (Figure 5B) and in single dimensions (for instance, Im in Figure 5C), the regions corresponding to the electrical classes are more intermixed. Third, for some channel types, successful solutions appear all across the parameter range (e.g., SK or Im) whereas for other channel types (e.g., Ih or Kfast) successful solutions appear to be restricted to a limited range of parameter values. Figure 6 portrays the experimental (red) versus model (green) variability. The source of the variability of the in vitro neurons is most likely the stochastic nature of the ion channels (Mainen and Sejnowski, 1995; Schneidman et al., 1998). In contrast, the in silico neurons are deterministic and have no internal variability. Yet, if one considers the full group of models generated to fit one cell, differences in the channel conductances may bring about similar variability. Thus, even though the sources of the variability are disparate, the range of models may be able to capture the experimental variability. As the number of repetitions performed experimentally was low, we normalize the values of the different repetitions of each cell to the mean and SD of that cell and pool all five cells together. As can be seen in Figure 6, the models (green circles) manage to capture nearly the entire range of experimental variability (red circles) with minimal bias.

Figure 6 Experimental versus model variability – accommodating neurons. For each of the five cells shown in Table 2, the values of each feature for the 225 pA step current are extracted for all repetitions. The mean of the feature value for each cell is subtracted and the result divided by the SD to arrive at the normalized distance from the mean. All repetitions of all five accommodating cells have been pooled together and are displayed for each feature separately (red circles). The same process is repeated for the single repetition available for each of the 300 parameter sets that passed the 2 SD criterion (green circles) at the end of the fitting process of the accommodating exemplar (AC5, Table 2).

Discussion

In this study, we have proposed a novel framework for constraining conductance-based compartmental models. Its central notion is that rather than trying to reduce the complex task of automated comparison between experimental firing patterns of neurons and simulation results to a single distance parameter, one should adopt a multiple objective approach. Such an approach enables one to employ jointly more than a single error function, each comparing a different aspect (or feature) of the experimental and model data sets. Different features of the response (e.g., spike rate, spike height, spike timing, etc.)
could be chosen according to the aim of the specific modeling effort. The mean and SD of each feature are then extracted from the noisy experimental results, allowing one to assess the quality of the match between model and experiment in meaningful units of the experimental SD. This framework generates a group of acceptable models that collectively represent both the mean and the variance of the experimental dataset. Whereas each individual model is still deterministic and will represent only a single instance of the experimental response, as a group the models capture the variability found experimentally.

Single versus multiple objective optimization

In order to assess the quality of the match between two voltage traces (e.g., experimental vs. simulated) that exhibit spiking behavior, different distance functions have been considered (Keren et al., 2005; Victor and Purpura, 1996). These studies seek a single error function to describe the quality of the match between the two traces. However, given the complicated nature of this comparison, one distance function might not suffice. For instance, while the trajectory-density error function accounts for the form of the voltage trace, it excludes the time parameter (LeMasson and Maex, 2001). Yet, one would also like to capture aspects of timing such as the first spike latency or the degree of accommodation. In order to accomplish this, an additional error function must be introduced. One may still sum these different error functions to obtain a single value (Keren et al., 2005); however, this potentially makes it difficult to have the different error functions contribute equally to the sum, as there is no guarantee that the error values are of the same range or magnitude. Hence, they would need to be normalized against one another. The problem is even more acute if one wishes to fit multiple stimuli with multiple error functions. Even if the above-mentioned normalization can be accomplished, the relative contribution of each of the error functions to the final error value must be assigned when they are summed. Yet, it seems very difficult to assign a specific value to the relative importance of two different stimuli, for instance, a depolarizing ramp and a depolarizing step current. How would one decide which of them should contribute more to the overall error? Lastly, one must also account for the fact that different models are used for different purposes, placing emphasis on diverse aspects of the model. For instance, in some cases it might be particularly important to match the first spike latency as accurately as possible (e.g., in models of the early visual system) while in other cases one might assign more importance to the overall spike rate. Using MOO, the error values of different error functions need not be summed and the problem of error summation is never encountered.

Feature-based error functions

We opted for feature-based error functions for suprathreshold depolarizing current steps for three main reasons: (i) their ability to take into account the intrinsic experimental variability; (ii) the clear demarcation of the electrical classes that they provide (Table 1); and (iii) the ease of interpreting the final fitting results (i.e., the errors are measured in SDs, which have a direct experimental meaning). Of course, using MOO one can employ any combination of direct comparison (e.g., mean square error) and feature-based error functions without being concerned by the fact that they return very different error values.
While MOO provides clear advantages over single objective optimization, the choice of the appropriate error functions must still be guided by the specific modeling effort. Different stimuli will be well addressed by different error functions. For instance, though mean square error is well known to be a poor option for depolarizing step currents that cause the model to spike (LeMasson and Maex, 2001), it is a reasonable measure when the stimulus is a hyperpolarizing current. The main disadvantage of direct comparison (as opposed to feature-based errors) is the difficulty of incorporating the intrinsic variability of the experimental responses. Calculating the mean of the raw voltage responses will result in an unreasonable trace, and any selection of a single trace must be to some degree arbitrary. A second disadvantage is the fact that direct comparison assigns an equal weight to every voltage point, which might lead to unequal weighting of different features. For instance, the peak of a spike will be represented by very few voltage points (as it is brief in time) while the AHP will include many more points. Thus, a point-by-point comparison will allot more weight to a discrepancy in the AHP depth than in the AP height. A final disadvantage is that the error value returned by a direct comparison is an arbitrary number that is difficult to interpret. This makes judging the final quality of a model a complicated matter.

Interpreting the end result of a multi-objective fitting procedure

At the end of a MOO fitting procedure, one is presented with a set of solutions. For each of the solutions, the values of the different parameters (in our case the maximal conductances of the channel types) and the error values for all features are provided. After a threshold for the acceptance of solutions is selected (e.g., an error of two SDs or less), one remains with a set of points, in parameter and error space, deemed successful that must be interpreted. The location of successful solutions in error space can be used to plot the Pareto fronts that, in turn, map which objectives are in conflict with one another. This allows one to pinpoint where the model is still lacking (or which combination of objectives still presents a more significant challenge for the model). The nature of the conflicting objectives might also suggest what could be modified in the model to overcome this conflict. For instance, if the value of the AHP is in conflict with the number of spikes, perhaps one type of potassium channel is determining both features, and thus another type of potassium channel may be added to allow minimization of the error in the two objectives simultaneously. Note that this type of information is completely lost if one uses single-objective optimization. Furthermore, knowing which objectives are in conflict with one another is particularly important if one wishes to collapse two objectives into one by summing their corresponding error values. As noted in Figure 4 above, if objectives are not conflicting, then their exact relative weighting will not drastically affect the point considered as a minimum of their sum. However, if they are conflicting, an algorithm that tries to minimize their sum will be driven toward different minima according to the weighting of the objectives. One may employ the spread of satisfactory models in parameter space to probe the dynamics underlying the models of a given electrical class.
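In rough terms (hypothetical data structures and function names; not the original NEURON implementation), the acceptance step and a first look at the parameter spread could be sketched as:

    def acceptable(population, threshold=2.0):
        """Keep solutions whose error is below the threshold (in SDs) in every objective."""
        return [sol for sol in population if all(e < threshold for e in sol['errors'])]

    def parameter_spread(solutions, param_names):
        """Range occupied by each maximal conductance across the acceptable solutions."""
        return {name: (min(s['params'][name] for s in solutions),
                       max(s['params'][name] for s in solutions))
                for name in param_names}

    # Each solution is assumed to look like
    # {'params': {'Nat': 310.0, 'Kv3.1': 120.0, ...}, 'errors': (0.4, 1.1, ...)}

A narrow range for a given conductance would hint that it is tightly constrained by the target behavior, subject to the caveats about parameter interactions discussed next.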
Yet, the functional interpretation of the spread of the maximal conductances of channels for all acceptable solutions across their allowed range is not trivial (Figure 5) (Prinz et al., 2003; Taylor et al., 2006). One basic intuition is that if the value of a parameter is restricted to a small range, then the channel it represents must be critical for the model behavior. Conversely, if a parameter is spread all across the parameter range, i.e., acceptable solutions can be achieved with any value of this parameter, the relevant channel contributes little to the model dynamics. Care should be taken when following this intuition, since the interactions between the different parameters must be taken into account. For instance, if the model behavior critically depends on the sum of two parameters rather than on their individual values (e.g., Na + K conductances), then this sum could be achieved by many combinations. Thus, while each of these channels is critical, the range of their values across successful solutions might be quite wide. Similar considerations hold for different types of correlations between the variables. With the caveat mentioned above, it is still tempting to assume that those parameters whose values are segregated in parameter space between the two classes are those responsible for the difference in the dynamics of the two classes (accommodating and fast-spiking) studied here. This issue should be explored in future studies. Second, the fact that there are no parameters for which many solutions crowd the upper part of the range and nowhere else (Figure 5C) suggests that we have picked a parameter range that does not limit the models. Lastly, one must interpret the range of non-unique solutions. There have been many studies, both experimental and computational, on the regulation of neuronal activity and its relation to cellular parameters (for a comprehensive review see Marder and Goaillard, 2006). Before we discuss the results of our study, we deem it important to distinguish between two types of non-uniqueness. The first is non-uniqueness of the model itself, namely, a situation in which two different parameter sets result in the exact same model behavior. The second is non-uniqueness introduced by the error functions, i.e., when two dissimilar model behaviors yield the same error value. For example, consider an error function that only evaluates the overall spike rate for a given stimulus. In this case, every solution resulting in the same number of spikes (clearly a large group) will receive the same error. Thus, in terms of the algorithm, all these solutions will be equally acceptable, non-unique solutions. Our study shows that if one attempts to incorporate the experimental variability in the fashion of this study, a wide range of parameters can be construed as successful solutions. This highlights that, when comparing the results of different fitting studies or when attempting to relate the results of computational studies to experiment, care must be taken to account for the method used to determine under what conditions two solutions are considered to be non-unique, as this choice strongly affects the results and might be over- or under-restrictive. In summary, the set of models deemed ultimately successful will depend on the non-uniqueness of the dynamics of the model itself (first kind) but, just as importantly, on the error function chosen (second kind).
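One simple way to probe such parameter interactions (a sketch on our part, assuming the acceptable solutions have been collected into an array of conductance vectors; the values shown are purely illustrative) is to look at pairwise correlations across the acceptable population:

    import numpy as np

    # Rows: acceptable solutions; columns: maximal conductances.
    params = np.array([[310.0, 120.0, 2.1],
                       [250.0, 180.0, 2.3],
                       [290.0, 140.0, 1.9],
                       [265.0, 170.0, 2.0]])

    corr = np.corrcoef(params, rowvar=False)
    # A strong negative correlation between two conductances (e.g., two potassium
    # channels) would suggest that their sum, rather than either value alone,
    # constrains the dynamics, even though each parameter spans a wide range.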
We note that one could constrain the number of non-unique models to a great degree by forcing them to conform to the shape of a single voltage trace. However, since the specific voltage profile is intrinsically variable, this might not be the appropriate way of reducing the number of solutions. By using an error function that attempts to capture certain features but does not constrain by one particular voltage trace, the acceptable models we found for the two electrical classes occupy, at least for some model parameters, a few significantly sized "clouds" in parameter space (Figure 5A). Each cloud viewed on its own seems fairly continuous, and the two electrical behaviors are well separated in these parameter spaces. Since at this point the model was constrained using only limited data (recordings only from the soma, one kind of stimulation (step current), etc.), we view the fairly large and dense space of successful solutions as an important result, as it leaves room for further constraining of the model with additional data. Indeed, we propose that one should attempt to separately fit the model to significantly different types of stimuli (ramp current, sinusoidal current, voltage clamp, etc.) applied to the same cell and then examine the overlap of the solutions for the different stimuli in parameter space. A number of important experimental studies have explored the relation between single channel expression and the firing properties of neurons (MacLean et al., 2003; Toledo-Rodriguez et al., 2004; Schulz et al., 2006, 2007). The results of Schulz et al. (2006, 2007) show that although the conductance of some channels may vary several fold, the pattern of expression of certain channels can still serve to distinguish between different cell types. By incorporating the experimental variability into our fitting method instead of fitting to a single voltage trace, we arrive at similar results. Namely, we find a wide parameter range that produces acceptable solutions for each class, yet the two classes are clearly distinct in some of the subspaces of the full parameter space.

Future research

This framework opens up many interesting avenues of inquiry. Ongoing research in our laboratory aims at connecting the constraints imposed by single cell gene expression on the types of ion channels for a given electrical class (Toledo-Rodriguez et al., 2004) to this fitting framework. Another challenge is generalizing this framework to additional stimuli (ramp, oscillatory input, etc.) and additional features that were not used in the present study (e.g., "burstiness"; see Figure 7 and the sketch below). Another interesting issue is the feasibility of finding an optimal stimulus (or a minimal set of stimuli), along with the corresponding set of error functions, that, when used jointly with MOO, yield a model that captures the experimental behavior for a large repertoire of stimuli that were not used during the fitting procedure. The relation of the features of such an optimal stimulus set to those features found experimentally to have an important discriminatory role among the electrical classes (Toledo-Rodriguez et al., 2004) is of further interest. Yet another question is, in what fashion does taking the experimental variability into account, as we did, affect the shape of the landscape of acceptable solutions in parameter space (the "non-uniqueness" problem)? As the main purpose of this study was to present a novel fitting framework, these issues were left for a future effort.
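A pause-counting feature of the kind used for the stuttering class in Figure 7 could, for instance, be sketched roughly as follows; the criterion of three times the median inter-spike interval is an arbitrary illustration on our part, not the definition used by the authors:

    def count_pauses(spike_times_ms, pause_factor=3.0):
        """Count inter-spike intervals much longer than the median interval."""
        if len(spike_times_ms) < 3:
            return 0
        isis = [t2 - t1 for t1, t2 in zip(spike_times_ms[:-1], spike_times_ms[1:])]
        median_isi = sorted(isis)[len(isis) // 2]
        return sum(1 for isi in isis if isi > pause_factor * median_isi)

Such a feature, with its experimental mean and SD, can simply be added as one more objective alongside those already used.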
Figure 7 A proof of principle: generating an additional electrical class – stuttering neurons. (A) Experimental response of a stuttering interneuron to a 2-second-long, 150 pA depolarizing current (Markram et al., 2004). (B) Response of a model for this cell type to the same current input. A feature that measures the number of pauses in the firing response has been added to the other six features used before. This demonstrates that with this additional feature one can obtain a qualitative fit of the stuttering electrical class. The values of the channel conductances (in mS/cm2) obtained in this fit are: Nat = 479; Nap = 0; Kfast = 482; Kslow = 477; IA = 99; Kv3.1 = 514; Ca = 6.57; SK = 85.4; Ih = 0; Im = 2.36; Leaksoma = 0.00684; Leakdendrite = 0.015.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to thank Yael Bitterman for valuable discussions; Maria Toledo-Rodriguez for the experimental data and for discussions regarding features; Phil Goodman for discussions regarding the selection of features; Srikanth Ramaswamy and Rajnish Ranjan for assistance with the literature on ion channels; and Sean Hill for discussions regarding the manuscript. We also thank the two referees whose comments and suggestions helped to improve this manuscript significantly.
