PMC:4342829

Many methods have been presented for this task, but less effort has been devoted to their critical evaluation. It is clear, however, that to make progress in this research area it is essential to assess the performance of the different algorithms quantitatively, in order to understand their weaknesses and strengths. Furthermore, if a new algorithm is to be accepted as a valuable addition to the state of the art, it must first be rigorously compared with the existing plethora of methods. This systematic comparison requires adequate benchmark problems, that is, reference calibration case studies of realistic size and nature that can be easily used by the community.

Several collections of benchmarks – and of methods for generating them – have already been published [13-19]. An artificial gene network generator, which allows the user to choose among different topologies, was presented in [13]. The system, known as A-BIOCHEM, generates pseudo-experimental noisy data in silico, simulating microarray experiments. An artificial gene network with ten genes generated in this way was later used to compare four reverse-engineering methods [15]. More recently, a toolkit called GRENDEL was presented for the same purpose [17], including several refinements to increase the biological realism of the benchmark. A reverse-engineering benchmark of a small biochemical network was presented in [14]. The model describes organism growth in a bioreactor, and the focus was on model discrimination using measurements of some intracellular components. A proposal for minimum requirements of problem specifications, along with a collection of 44 small benchmarks for ODE model identification of cellular systems, was presented in [16]. The collection includes parameter estimation problems as well as combined parameter and structure inference problems. Another method for generating dynamical models of gene regulatory networks to be used as benchmarks is GeneNetWeaver [19], which was used to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5) [18]. Subsequent competitions (DREAM6, DREAM7) also included parameter estimation challenges with medium-scale models [20]. Similar efforts have been carried out in related areas, such as in optimization, where BBOB workshops (Black-Box Optimization Benchmarking, [21]) have been organised since 2009. In this context it is also worth mentioning the collection of large-scale, nonlinearly constrained optimization problems from the physical sciences and engineering (COPS) [22].
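To make the recurring notion of an ODE parameter-estimation benchmark concrete, the following is a minimal Python sketch of the workflow the cited collections formalise: simulate a known model, corrupt the trajectory with noise to obtain pseudo-experimental data, then recover the parameters by least-squares calibration. The toy model (logistic growth), noise level, and solver choices are illustrative assumptions, not taken from any of the benchmark suites cited above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical toy benchmark: logistic growth dx/dt = r*x*(1 - x/K).
# True parameters, to be recovered from noisy pseudo-experimental data.
R_TRUE, K_TRUE = 0.8, 2.0
T_OBS = np.linspace(0.0, 10.0, 20)  # observation times
X0 = 0.1                            # known initial condition

def simulate(params, t_obs):
    """Integrate the ODE and return the state at the observation times."""
    r, k = params
    sol = solve_ivp(lambda t, x: r * x * (1.0 - x / k),
                    (t_obs[0], t_obs[-1]), [X0], t_eval=t_obs)
    return sol.y[0]

# Generate in-silico data with additive Gaussian noise (the same idea,
# vastly simplified, behind generators such as A-BIOCHEM or GeneNetWeaver).
rng = np.random.default_rng(0)
data = simulate((R_TRUE, K_TRUE), T_OBS) + rng.normal(0.0, 0.05, T_OBS.size)

# Calibration: minimise the residuals between model output and data.
fit = least_squares(lambda p: simulate(p, T_OBS) - data,
                    x0=[0.5, 1.0],
                    bounds=([1e-3, 1e-3], [10.0, 10.0]))
r_hat, k_hat = fit.x
print(f"estimated r={r_hat:.2f}, K={k_hat:.2f}")  # should be close to 0.8 and 2.0
```

A complete benchmark specification would additionally fix the noise model, the initial guesses, and a reference solution so that different estimation algorithms can be compared on equal footing, in line with the minimum-requirements proposal of [16].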