Optimize a neuron model

Introduction

CompNeuroPy provides the OptNeuron class, which can be used to define the optimization (parameter tuning) of an ANNarchy neuron model. You can either fit your neuron model to data or reproduce the dynamics of a different neuron model (for example, to reduce a more complex model). In both cases, you have to define the experiment which generates the data of interest with your neuron model.

Warning

OptNeuron has to be imported from "CompNeuroPy.opt_neuron", and torch, sbi, pybads, and hyperopt have to be installed separately (e.g., pip install torch sbi pybads hyperopt).
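
For example:

from CompNeuroPy.opt_neuron import OptNeuron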

Available optimization methods:

  • hyperopt (using the Tree of Parzen Estimators (TPE))

    • Bergstra, J., Yamins, D., & Cox, D. D. (2013). Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures. Proc. of the 30th International Conference on Machine Learning (ICML 2013), June 2013, pp. I-115 to I-123.
  • sbi

  • deap (using the CMAES strategy)

    • Fortin, F. A., De Rainville, F. M., Gardner, M. A. G., Parizeau, M., & Gagné, C. (2012). DEAP: Evolutionary algorithms made easy. The Journal of Machine Learning Research, 13(1), 2171-2175.
  • pybads

    • Singh, G. S., & Acerbi, L. (2023). PyBADS: Fast and robust black-box optimization in Python. arXiv preprint arXiv:2306.15576.
    • Acerbi, L., & Ma, W. J. (2017). Practical Bayesian optimization for model fitting with Bayesian adaptive direct search. Advances in Neural Information Processing Systems, 30.

Example:

opt = OptNeuron(
    experiment=my_exp,
    get_loss_function=get_loss,
    variables_bounds=variables_bounds,
    results_soll=experimental_data["results_soll"],
    time_step=experimental_data["time_step"],
    compile_folder_name="annarchy_opt_neuron_example",
    neuron_model=my_neuron,
    method="hyperopt",
    record=["r"],
)
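
In this example, variables_bounds determines which parameters are optimized: a list value defines the lower and upper bound of a fitted parameter, a single value defines a constant parameter, and a string is evaluated as a mathematical expression of the other parameters (see the OptNeuron API reference below). A minimal sketch with illustrative parameter names:

variables_bounds = {
    "a": [0.0, 1.0],  # optimized within the bounds 0 and 1
    "b": 2.0,         # constant, not optimized
    "c": "a + b",     # evaluated from the other parameter values
}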

A full example is available in the Examples.

Run the optimization

To run the optimization, simply call the run() function of the OptNeuron object, as sketched below.
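
A minimal sketch, assuming run() accepts a max_evals argument which sets the number of evaluations (see the OptNeuron API reference below for the exact signature):

fit = opt.run(max_evals=1000)  # max_evals is an assumed argument; check the API reference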

Define the experiment

You have to define a CompNeuroExp object containing a run() function which performs the simulations and recordings.

Warning

When defining the CompNeuroExp run() function for the optimization with OptNeuron, you must observe the following rules:

  • the run() function has to take a single argument (besides self) which contains the name of the population consisting of one or multiple neurons of the optimized neuron model (you can use this to access the population)
  • thus, the simulation has to be compatible with a population consisting of one or multiple neurons
  • use the self.reset() function within the run() function to create new recording chunks and reset the model parameters/variables!
  • OptNeuron automatically sets the parameters/variables defined in variables_bounds before each run; self.reset() will reset the model to this state (all parameters/variables not defined in variables_bounds are reset to the compile state)
  • be aware that the target neuron model is always reset to the compile state (this affects results_soll)!
  • using self.reset(parameters=False) in the run() function keeps all parameter changes you make during the experiment
  • start the monitors before you want to record something by calling self.monitors.start()
  • the best parameters, the corresponding loss, and the corresponding results of the experiment will be available after the optimization; you can store any additional data which should be available after the optimization in the self.data attribute
  • do not call the functions store_model_state() and reset_model_state() of the CompNeuroExp class within the run() function!

Example:

class my_exp(CompNeuroExp):
    """
    Define an experiment by inheriting from CompNeuroExp.

    CompNeuroExp provides the attributes:

        monitors (CompNeuroMonitors):
            a CompNeuroMonitors object to do recordings, define during init otherwise
            None
        data (dict):
            a dictionary for storing any optional data

    and the functions:
        reset():
            resets the model and monitors
        results():
            returns a results object
    """

    def run(self, population_name):
        """
        Do the simulations and recordings.

        To use the CompNeuroExp class, you need to define a run function which
        does the simulations and recordings. The run function should return the
        results object which can be obtained by calling self.results().

        For using the CompNeuroExp for OptNeuron, the run function should have one
        argument which is the name of the population automatically created by
        OptNeuron, containing one or multiple neurons of the model to be optimized.

        Args:
            population_name (str):
                name of the population which contains one or multiple neurons, this
                will be automatically provided by OptNeuron

        Returns:
            results (CompNeuroExp._ResultsCl):
                results object with attributes:
                    recordings (list):
                        list of recordings
                    recording_times (recording_times_cl):
                        recording times object
                    mon_dict (dict):
                        dict of recorded variables of the monitors
                    data (dict):
                        dict with optional data stored during the experiment
        """
        ### you have to start monitors within the run function, otherwise nothing will
        ### be recorded
        self.monitors.start()

        ### run the simulation, if you reset the monitors/model the model_state argument
        ### has to be True (Default)
        ...
        simulate(100)
        self.reset()
        ...

        ### optional: store anything you want in the data dict. For example information
        ### about the simulations. This is not used for the optimization but can be
        ### retrieved after the optimization is finished
        self.data["sim"] = sim_step.simulation_info()
        self.data["population_name"] = population_name
        self.data["time_step"] = dt()

        ### return results, use the object's self.results()
        return self.results()

The get_loss_function

The get_loss_function must have two arguments. When this function is called during optimization, the first argument is always the results object returned by the experiment, i.e. the results of the neuron you want to optimize. The second argument depends on whether you have specified results_soll, i.e. data to be reproduced by the neuron_model, or whether you have specified a target_neuron_model whose results are to be reproduced by the neuron_model. Thus, the second argument is either results_soll provided to the OptNeuron class during initialization or another results object (returned by the CompNeuroExp run function), generated with the target_neuron_model.
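
For the second case, a minimal sketch of the initialization with a target neuron model (my_target_neuron is an illustrative name; note that results_soll and target_neuron_model must not be provided together):

opt = OptNeuron(
    experiment=my_exp,
    get_loss_function=get_loss,
    variables_bounds=variables_bounds,
    target_neuron_model=my_target_neuron,  # ANNarchy Neuron producing the target data
    time_step=1.0,
    neuron_model=my_neuron,
    method="hyperopt",
    record=["r"],
)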

Warning

You always have to work with the neuron rank 0 within the get_loss_function!

Example:

In this example we assume that results_soll was provided during initialization of the OptNeuron class (no target_neuron_model used).

def get_loss(results_ist: CompNeuroExp._ResultsCl, results_soll):
    """
    Function which has to have the arguments results_ist and results_soll and should
    calculate and return the loss. This structure is needed for the OptNeuron class.

    Args:
        results_ist (object):
            the results object returned by the run function of experiment (see above)
        results_soll (any):
            the target data directly provided to OptNeuron during initialization

    Returns:
        loss (float or list of floats):
            the loss
    """
    ### get the recordings and other important things for calculating the loss from
    ### results_ist, we do not use all available information here, but you could
    rec_ist = results_ist.recordings
    pop_ist = results_ist.data["population_name"]
    neuron = 0

    ### get the data for calculating the loss from the results_soll
    r_target_0 = results_soll[0]
    r_target_1 = results_soll[1]

    ### get the data for calculating the loss from the recordings
    r_ist_0 = rec_ist[0][f"{pop_ist};r"][:, neuron]
    r_ist_1 = rec_ist[1][f"{pop_ist};r"][:, neuron]

    ### calculate the loss, e.g. the root mean squared error
    rmse1 = rmse(r_target_0, r_ist_0)
    rmse2 = rmse(r_target_1, r_ist_1)

    ### return the loss, one can return a single value or a list of values which will
    ### be summed during the optimization
    return [rmse1, rmse2]
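
The rmse helper used in this example is not defined above; a minimal sketch of such a function, assuming plain NumPy arrays (a simple stand-in, not necessarily part of CompNeuroPy's API):

import numpy as np

def rmse(target, ist):
    # root mean squared error between target and simulated recordings
    return np.sqrt(np.mean((np.asarray(target) - np.asarray(ist)) ** 2))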

CompNeuroPy.opt_neuron.OptNeuron

This class is used to optimize neuron models with ANNarchy.

Source code in src/CompNeuroPy/opt_neuron.py
class OptNeuron:
    """
    This class is used to optimize neuron models with ANNarchy.
    """

    opt_created = []

    @check_types(warnings=False)
    def __init__(
        self,
        experiment: Type[CompNeuroExp],
        get_loss_function: Callable[[Any, Any], float | list[float]],
        variables_bounds: dict[str, float | str | list[float | str]],
        neuron_model: Neuron,
        results_soll: Any | None = None,
        target_neuron_model: Neuron | None = None,
        time_step: float = 1.0,
        recording_period: float | None = None,
        compile_folder_name: str = "annarchy_OptNeuron",
        num_rep_loss: int = 1,
        method: str = "deap",
        prior=None,
        fv_space: list = None,
        record: list[str] = [],
        cma_params_dict: dict = {},
        bads_params_dict: dict = {},
        source_solutions: list[tuple[np.ndarray, float]] = [],
        variables_bounds_guess: None | dict[str, list[float]] = None,
        verbose=False,
    ):
        """
        This prepares the optimization. To run the optimization call the run function.

        Args:
            experiment (CompNeuroExp class):
                CompNeuroExp class containing a 'run' function which defines the
                simulations and recordings
            get_loss_function (function):
                function which takes results_ist and results_soll as arguments and
                calculates/returns the loss
            variables_bounds (dict):
                Dictionary with parameter names (keys) and their bounds (values). If
                single values are given as values, the parameter is constant, i.e., not
                optimized. If a list is given as value, the parameter is optimized and
                the list contains the lower and upper bound of the parameter (order is
                not important). If strings instead of numbers are given, the string is
                interpreted as a mathematical expression which is evaluated with the
                other parameter values (i.e. {"x":[0,1],"dxy":[-1,1],"y":"x+dxy","z":5}).
            neuron_model (ANNarchy Neuron):
                The neuron model whose parameters should be optimized.
            results_soll (Any, optional):
                Some variable which contains the target data and can be used by the
                get_loss_function (second argument of get_loss_function)
                !!! warning
                    Either provide results_soll or a target_neuron_model, not both!
                Default: None.
            target_neuron_model (ANNarchy Neuron, optional):
                The neuron model which produces the target data by running the
                experiment.
                !!! warning
                    Either provide results_soll or a target_neuron_model, not both!
                Default: None.
            time_step (float, optional):
                The time step for the simulation in ms. Default: 1.
            recording_period (float, optional):
                The recording period for the simulation in ms. Default: None, i.e., the
                time_step is used.
            compile_folder_name (string, optional):
                The name of the annarchy compilation folder within annarchy_folders/.
                Default: 'annarchy_OptNeuron'.
            num_rep_loss (int, optional):
                Only interesting for noisy simulations/models. How often should the
                simulation be run to calculate the loss (the defined number of losses
                is obtained and averaged). Default: 1.
            method (str, optional):
                Either 'deap', 'sbi', or 'hyperopt'. Defines the tool which is used for
                optimization. Default: 'deap'.
            prior (distribution, optional):
                The prior distribution used by sbi. Default: None, i.e., uniform
                distributions between the variable bounds are assumed.
            fv_space (list, optional):
                The search space for hyperopt. Default: None, i.e., uniform
                distributions between the variable bounds are assumed.
            record (list, optional):
                List of strings which define what variables of the tuned neuron should
                be recorded. Default: [].
            cma_params_dict (dict, optional):
                Dictionary with parameters for the deap.cma.Strategy. Default: {}.
                See [here](https://deap.readthedocs.io/en/master/api/algo.html#deap.cma.Strategy) for more information.
            bads_params_dict (dict, optional):
                Dictionary with parameters for the bads optimization. Default: {}.
                See [here](https://acerbilab.github.io/pybads/api/options/bads_options.html) for more information.
            source_solutions (list, optional):
                List of tuples with the source solutions. Each tuple contains a numpy
                array with the parameter values and the loss. Used for initialization of
                cma optimization with deap. Default: [].
            variables_bounds_guess (dict, optional):
                Dictionary with parameter names (keys) and their bounds (values) as
                list. These bounds define the region where the minimum is expected. Used
                for the BADS optimization. Default: None.
            verbose (bool, optional):
                If True, print additional information. Default: False.
        """

        if len(self.opt_created) > 0:
            print(
                "OptNeuron: Error: Already another OptNeuron created. Only create one per python session!"
            )
            quit()
        else:
            print(
                "OptNeuron: Initialize OptNeuron... do not create anything with ANNarchy before!"
            )

            ### set object variables
            self.verbose = verbose
            self.verbose_run = False
            self.opt_created.append(1)
            self.record = record
            self.results_soll = results_soll
            self.variables_bounds = variables_bounds
            self.fitting_variables_name_list = self._get_fitting_variables_name_list()
            self.method = method
            if method == "hyperopt":
                if fv_space is None:
                    self.fv_space = self._get_hyperopt_space()
                else:
                    self.fv_space = fv_space
            self.const_params = self._get_const_params()
            self.num_rep_loss = num_rep_loss
            self.neuron_model = neuron_model
            if method == "sbi":
                self.prior = self._get_prior(prior)
            self.target_neuron = target_neuron_model
            self.compile_folder_name = compile_folder_name
            self._get_loss = get_loss_function
            self.cma_params_dict = cma_params_dict
            self.source_solutions = source_solutions
            self.variables_bounds_guess = variables_bounds_guess
            self.bads_params_dict = bads_params_dict
            self.loss_history = []
            self.start_time = time()
            self.recording_period = recording_period

            ### if using deap pop size is the number of individuals for the optimization
            if method == "deap":
                self._deap_cma = self._prepare_deap_cma()
                self.popsize = self._deap_cma.deap_dict["strategy"].lambda_
            else:
                self.popsize = 1
            if self.verbose:
                print("OptNeuron: popsize:", self.popsize)

            ### check target_neuron/results_soll
            self._check_target()
            ### check neuron models
            self._check_neuron_models()

            ### setup ANNarchy
            setup(dt=time_step)

            ### create and compile model
            ### if neuron models and target neuron model --> create both models then
            ### test, then clear and create only model for neuron model
            model, target_model, monitors = self._generate_models(self.popsize)

            self.pop = model.populations[0]
            if target_model is not None:
                self.pop_target = target_model.populations[0]
            else:
                self.pop_target = None
            ### create experiment with current monitors
            self.experiment = experiment(monitors=monitors)

            ### check variables of model
            self._test_variables()

            ### check neuron models, experiment, get_loss
            ### if results_soll is None --> generate results_soll
            self._check_get_loss_function()

            ### after checking neuron models, experiment, get_loss
            ### clear ANNarchy and create/compile again only
            ### standard model, thus recreate also monitors and experiment
            mf.cnp_clear()
            model, _, monitors = self._generate_models(self.popsize)
            self.monitors = monitors
            self.experiment = experiment(monitors=monitors)

    def _get_lower_upper_p0(self):
        """
        Returns the lower and upper bounds and the initial values for the cma
        optimization with deap.

        Returns:
            lower (np.array):
                The lower bounds for the optimization.
            upper (np.array):
                The upper bounds for the optimization.
            p0 (np.array):
                The initial values for the optimization.
        """

        lower = np.array(
            [
                min(self.variables_bounds[name])
                for name in self.fitting_variables_name_list
            ]
        )
        upper = np.array(
            [
                max(self.variables_bounds[name])
                for name in self.fitting_variables_name_list
            ]
        )
        p0 = np.array(
            [
                np.random.uniform(
                    min(self.variables_bounds[key]),
                    max(self.variables_bounds[key]),
                )
                for key in self.fitting_variables_name_list
            ]
        )
        return lower, upper, p0

    def _get_lower_upper_x0(self):
        """
        Returns the lower and upper bounds and the initial values for the optimization
        with bads.

        Returns:
            lower (np.array):
                The lower bounds for the optimization.
            upper (np.array):
                The upper bounds for the optimization.
            x0 (np.array):
                The initial values for the optimization.
            lower_guess (np.array):
                The lower bounds for the optimization where the minimum is expected.
            upper_guess (np.array):
                The upper bounds for the optimization where the minimum is expected.
        """
        lower, upper, x0 = self._get_lower_upper_p0()

        if not isinstance(self.variables_bounds_guess, type(None)):
            lower_guess = np.array(
                [
                    min(self.variables_bounds_guess[name])
                    for name in self.fitting_variables_name_list
                ]
            )
            upper_guess = np.array(
                [
                    max(self.variables_bounds_guess[name])
                    for name in self.fitting_variables_name_list
                ]
            )
        else:
            lower_guess = deepcopy(lower)
            upper_guess = deepcopy(upper)

        return lower, upper, x0, lower_guess, upper_guess

    def _prepare_deap_cma(self):
        """
        Initializes the DeapCma class.

        Returns:
            deap_cma (DeapCma):
                The initialized DeapCma object.
        """

        LOWER, UPPER, p0 = self._get_lower_upper_p0()

        deap_cma = ef.DeapCma(
            max_evals=0,
            lower=LOWER,
            upper=UPPER,
            evaluate_function=self._deap_simulation_wrapper,
            p0=p0,
            param_names=self.fitting_variables_name_list,
            learn_rate_factor=1,
            damping_factor=1,
            verbose=False,
            plot_file=None,
            cma_params_dict=self.cma_params_dict,
            source_solutions=self.source_solutions,
        )
        return deap_cma

    class _NullContextManager:
        def __enter__(self):
            # This method is called when entering the context
            pass

        def __exit__(self, exc_type, exc_value, traceback):
            # This method is called when exiting the context
            pass

    def _generate_models(self, popsize=1):
        """
        Generates the tuned model and the target_model (only if results_soll is None).

        Args:
            popsize (int, optional):
                The number of neurons in the population(s). Default: 1.

        Returns:
            model (CompNeuroModel):
                The model which is used for the optimization.

            target_model (CompNeuroModel):
                The model which is used to generate the target data. If results_soll is
                provided, target_model is None.

            monitors (CompNeuroMonitors):
                The monitors which are used to record the data. If no variables are
                recorded, monitors is None.
        """
        with self._NullContextManager() if self.verbose else ef.suppress_stdout():
            model = None
            target_model = None
            monitors = None
            if self.results_soll is None:
                if self.verbose:
                    print(
                        "OptNeuron: Create two models (optimized and target for obtaining results_soll)"
                    )
                    print("optimized neuron model:", self.neuron_model)
                    print("target neuron model:", self.target_neuron)
                ### create two models
                model = CompNeuroModel(
                    model_creation_function=self._raw_neuron,
                    model_kwargs={
                        "neuron": self.neuron_model,
                        "name": "model_neuron",
                        "size": popsize,
                    },
                    name="standard_model",
                    do_create=True,
                    do_compile=False,
                    compile_folder_name=self.compile_folder_name,
                )

                target_model = CompNeuroModel(
                    model_creation_function=self._raw_neuron,
                    model_kwargs={
                        "neuron": self.target_neuron,
                        "name": "target_model_neuron",
                        "size": 1,
                    },
                    name="target_model",
                    do_create=True,
                    do_compile=True,
                    compile_folder_name=self.compile_folder_name,
                )

                ### create monitors
                if len(self.record) > 0:
                    recording_period_str = (
                        f";{self.recording_period}"
                        if self.recording_period is not None
                        and ("spike" not in self.record or len(self.record) > 1)
                        else ""
                    )
                    monitors = CompNeuroMonitors(
                        {
                            f"{pop_name}{recording_period_str}": self.record
                            for pop_name in [
                                model.populations[0],
                                target_model.populations[0],
                            ]
                        }
                    )

            else:
                if self.verbose:
                    print(
                        "OptNeuron: Create one model (optimized, results_soll is available)"
                    )
                    print("optimized neuron model:", self.neuron_model)
                ### create one model
                model = CompNeuroModel(
                    model_creation_function=self._raw_neuron,
                    model_kwargs={
                        "neuron": self.neuron_model,
                        "name": "model_neuron",
                        "size": popsize,
                    },
                    name="single_model",
                    do_create=True,
                    do_compile=True,
                    compile_folder_name=self.compile_folder_name,
                )
                ### create monitors
                if len(self.record) > 0:
                    recording_period_str = (
                        f";{self.recording_period}"
                        if self.recording_period is not None
                        and ("spike" not in self.record or len(self.record) > 1)
                        else ""
                    )
                    monitors = CompNeuroMonitors(
                        {f"{model.populations[0]}{recording_period_str}": self.record}
                    )

        return model, target_model, monitors

    def _check_neuron_models(self):
        """
        Checks if the neuron models are ANNarchy neuron models.
        """
        if not (isinstance(self.neuron_model, type(Neuron()))) or (
            self.target_neuron is not None
            and not (isinstance(self.target_neuron, type(Neuron())))
        ):
            print(
                "OptNeuron: Error: neuron_model and/or target_neuron_model have to be ANNarchy neuron models"
            )
            quit()

    def _check_target(self):
        """
        Check if either results_soll or target_neuron are provided and not both.
        """
        if self.target_neuron is None and self.results_soll is None:
            print(
                "OptNeuron: Error: Either provide results_soll or target_neuron_model"
            )
            quit()
        elif self.target_neuron is not None and self.results_soll is not None:
            print(
                "OptNeuron: Error: Either provide results_soll or target_neuron_model, not both"
            )
            quit()

    def _get_prior(self, prior):
        """
        Get the prior distribution used by sbi. If no prior is given, uniform
        distributions between the variable bounds are assumed. If a prior is given,
        this prior is used.

        Args:
            prior (distribution, optional):
                The prior distribution used by sbi. Default: None, i.e., uniform
                distributions between the variable bounds are assumed.

        Returns:
            prior (distribution):
                The prior distribution used by sbi.
        """
        if prior is None:
            prior_min = []
            prior_max = []
            for _, param_bounds in self.variables_bounds.items():
                if isinstance(param_bounds, list):
                    prior_min.append(param_bounds[0])
                    prior_max.append(param_bounds[1])

            return utils.BoxUniform(
                low=torch.as_tensor(prior_min), high=torch.as_tensor(prior_max)
            )
        else:
            return prior

    def _get_fitting_variables_name_list(self):
        """
        Returns a list with the names of the fitting variables.

        Returns:
            fitting_variables_name_list (list):
                list with names of fitting variables
        """
        name_list = []
        for param_name, param_bounds in self.variables_bounds.items():
            if isinstance(param_bounds, list):
                name_list.append(param_name)

        if self.verbose:
            print("OptNeuron: Fitting variables:", name_list)
        return name_list

    def _get_hyperopt_space(self):
        """
        Generates the hyperopt variable space from the fitting variable bounds. The
        variable space is a uniform distribution between the bounds.

        Returns:
            fitting_variables_space (list):
                list with hyperopt variables
        """
        fitting_variables_space = []
        for param_name, param_bounds in self.variables_bounds.items():
            if isinstance(param_bounds, list):
                fitting_variables_space.append(
                    hp.uniform(param_name, min(param_bounds), max(param_bounds))
                )
        return fitting_variables_space

    def _get_const_params(self):
        """
        Returns:
            const_params (dict):
                Dictionary with constant variables. The keys are the parameter names
                and the values are the parameter values.
        """
        const_params = {}
        for param_name, param_bounds in self.variables_bounds.items():
            if not (isinstance(param_bounds, list)):
                const_params[param_name] = param_bounds

        if self.verbose:
            print("OptNeuron: Constant parameters:", const_params)
        return const_params

    def _check_get_loss_function(self):
        """
        Checks if the get_loss_function is compatible with the experiment and the neuron
        model(s). To test, the experiment is run once with the tuned neuron model
        (generating results_ist) and once with the target neuron model (if provided,
        generating results_soll). Then, the get_loss_function is called with the
        results_ist and results_soll.
        """
        print("checking neuron_models, experiment, get_loss...", end="")

        fitparams = []
        for bounds in self.variables_bounds.values():
            if isinstance(bounds, list):
                fitparams.append(bounds[0])
        if self.verbose:
            print(
                f"fitparams selected for checking: {fitparams} ({self.fitting_variables_name_list})"
            )

        if self.results_soll is not None:
            ### only generate results_ist with standard neuron model
            results_ist = self._run_simulator_with_results(fitparams)["results"]
        else:
            ### run simulator with both populations (standard neuron model and target
            ### neuron model) and generate results_ist and results_soll
            results_ist = self._run_simulator_with_results(fitparams)["results"]
            self.results_soll = self._run_simulator_with_results(
                fitparams, pop=self.pop_target
            )["results"]

        try:
            self._wrapper_get_loss(results_ist, self.results_soll)
        except:
            print(
                "\nThe get_loss_function, experiment and neuron model(s) are not compatible:\n"
            )
            traceback.print_exc()
            quit()
        print("Done\n")

    def _wrapper_get_loss(self, results_ist, results_soll):
        """
        Makes it possible to use the get_loss_function with multiple neurons. The
        get_loss_function should always calculate the loss for neuron rank 0!

        Args:
            results_ist (object):
                the results object returned by the run function of experiment (see above)
                it can contain recordings of multiple neurons
            results_soll (any):
                the target data directly provided to OptNeuron during initialization
                it always contains only the recordings of a single neuron

        Returns:
            all_loss_list (list):
                list of lists containing the 'all_loss_list' for each neuron
        """
        ### loop over neurons and calculate all_loss_list for each neuron
        all_loss_list = []
        for neuron_idx in range(self.popsize):
            results_ist_neuron = self._get_results_of_single_neuron(
                results_ist, neuron_idx
            )
            all_loss_list.append(self._get_loss(results_ist_neuron, results_soll))

        return all_loss_list

    def _get_results_of_single_neuron(self, results, neuron_idx):
        """
        Returns a results object which contains only the recordings of the given neuron
        index. The defined neuron will be neuron rank 0 in the returned results object.

        Args:
            results (object):
                the results object returned by the run function of experiment (see above)
                it can contain recordings of multiple neurons
            neuron_idx (int):
                index of the neuron whose recordings should be returned

        Returns:
            results_neuron (object):
                the results object as returned by the run function of experiment for a
                single neuron
        """
        ### if only one neuron, simply return results
        if self.popsize == 1:
            return results

        ### if multiple neurons, return results for single neuron, do not change
        ### original results!
        results_neuron = deepcopy(results)

        ### loop over chunks and recordings and select only the recordings of the
        ### defined neuron
        for chunk in range(len(results_neuron.recordings)):
            for rec_key in results_neuron.recordings[chunk].keys():
                ### adjust spike dictionary
                if "spike" in rec_key and not ("target" in rec_key):
                    results_neuron.recordings[chunk][rec_key] = {
                        0: results_neuron.recordings[chunk][rec_key][neuron_idx]
                    }
                ### adjust all recorded arrays
                elif not (
                    "period" in rec_key
                    or "parameter_dict" in rec_key
                    or "dt" in rec_key
                    or "target" in rec_key
                ):
                    results_neuron.recordings[chunk][rec_key] = (
                        results_neuron.recordings[chunk][rec_key][
                            :, neuron_idx
                        ].reshape(-1, 1)
                    )
                ### adjust parameter_dict
                elif "parameter_dict" in rec_key and not ("target" in rec_key):
                    results_neuron.recordings[chunk][rec_key] = {
                        parameter_dict_key: np.array(
                            [
                                results_neuron.recordings[chunk][rec_key][
                                    parameter_dict_key
                                ][neuron_idx]
                            ]
                        )
                        for parameter_dict_key in results_neuron.recordings[chunk][
                            rec_key
                        ].keys()
                    }

        return results_neuron

    def _raw_neuron(self, neuron, name, size):
        """
        Generates a population with one neuron of the given neuron model.

        Args:
            neuron (ANNarchy Neuron):
                The neuron model.
            name (str):
                The name of the population.
            size (int):
                The number of neurons in the population.
        """
        Population(size, neuron=neuron, name=name)

    def _test_variables(self):
        """
        Check if the tuned neuron model contains all parameters which are defined in
        variables_bounds or even more.
        """
        ### collect all names
        all_vars_names = np.concatenate(
            [
                np.array(list(self.const_params.keys())),
                np.array(self.fitting_variables_name_list),
            ]
        ).tolist()
        ### check if pop has these parameters
        pop_parameter_names = get_population(self.pop).attributes.copy()
        for name in pop_parameter_names.copy():
            if name in all_vars_names:
                all_vars_names.remove(name)
                pop_parameter_names.remove(name)
        if len(pop_parameter_names) > 0:
            print(
                "OptNeuron: WARNING: attributes",
                pop_parameter_names,
                "are not used/initialized.",
            )
        if len(all_vars_names) > 0:
            print(
                "OptNeuron: WARNING: The neuron_model does not contain parameters",
                all_vars_names,
                "!",
            )

    def _run_simulator(self, fitparams):
        """
        Runs the function simulator with the multiprocessing manager (if function is
        called multiple times this saves memory, otherwise same as calling simulator
        directly).

        Args:
            fitparams (list):
                list with values for fitting parameters or list of lists with values
                for fitting parameters (first dimension is the number of parameters,
                second dimension is the number of neurons)

        Returns:
            return_dict (dict):
                dictionary needed for optimization with hyperopt, containing the loss,
                the loss variance (in case of noisy models with multiple runs per loss
                calculation), and the status (STATUS_OK for hyperopt).
        """

        ### initialize manager and generate m_list = dictionary to save data
        manager = multiprocessing.Manager()
        m_list = manager.dict()

        ### in case of noisy models, optionally run multiple simulations here to average the loss
        loss_list_over_runs = []

        return_results = False
        for _ in range(self.num_rep_loss):
            ### initialize for each run a new rng (--> not always have same noise in case of noisy models/simulations)
            rng = np.random.default_rng()
            ### run simulator with multiprocessing manager
            proc = Process(
                target=self._simulator, args=(fitparams, rng, m_list, return_results)
            )
            proc.start()
            proc.join()
            ### get simulation results/loss (list of losses for each neuron)
            loss_list_over_runs.append(m_list[0])

        ### create loss array, first dimension is the number of runs, second dimension
        ### is the number of neurons
        loss_arr = np.array(loss_list_over_runs)

        ### calculate mean and std of loss over runs
        if self.num_rep_loss > 1:
            ### multiple runs, mean over runs
            ### -> resulting in 1D arrays for neurons
            loss_ret_arr = np.mean(loss_arr, 0)
            std_ret_arr = np.std(loss_arr, 0)
        else:
            ### just take the first entry (the only one)
            ### -> resulting in 1D arrays for neurons
            loss_ret_arr = loss_arr[0]
            std_ret_arr = np.array([None] * self.popsize)

        ### if only one neuron, return loss and std as single values
        if self.popsize == 1:
            loss = loss_ret_arr[0]
            std = std_ret_arr[0]
        else:
            loss = loss_ret_arr
            std = std_ret_arr

        ### append best loss and time since start to loss_history
        self.loss_history.append([af.get_minimum(loss), time() - self.start_time])

        ### return loss and other things for optimization, if multiple neurons
        ### --> loss and std are arrays with loss/std for each neuron
        if self.num_rep_loss > 1:
            return {"status": STATUS_OK, "loss": loss, "loss_variance": std}
        else:
            return {"status": STATUS_OK, "loss": loss}

    def _sbi_simulation_wrapper(self, fitparams):
        """
        This function is called by sbi. It calls the simulator function and
        returns the loss and adjusts the format of the input parameters.

        Args:
            fitparams (tensor):
                either a batch of parameters (tensor with two dimensions) or a single
                parameter set

        Returns:
            loss (tensor):
                loss as tensor for sbi inference
        """
        fitparams = np.asarray(fitparams)
        if len(fitparams.shape) == 2:
            ### batch parameters!
            data = []
            ### TODO the run_simulator_function can now handle multiple parameter sets
            ### and directly can return the loss for each parameter set, but the model
            ### has to have the correct size, i.e., the number of neurons has to be
            ### the same as the number of parameter sets, maybe adjust sbi to this
            for idx in range(fitparams.shape[0]):
                data.append(self._run_simulator(fitparams[idx])["loss"])
        else:
            ### single parameter set!
            data = [self._run_simulator(fitparams)["loss"]]

        return torch.as_tensor(data)

    def _deap_simulation_wrapper(self, population: list):
        """
        This function is called by deap. It calls the simulator function and
        returns the loss and adjusts the format of the input parameters.

        Args:
            population (list):
                list of lists with values for fitting parameters (first dimension is
                the number of neurons, second dimension is the number of parameters)
                given by deap
        """
        ### transpose population list (now the first dimension is the number of parameters)
        populationT = np.array(population).T.tolist()
        ### get loss list
        loss_list = self._run_simulator(populationT)["loss"]
        ### return loss list as list of tuples (deap needs this format)
        return [(loss_list[neuron_idx],) for neuron_idx in range(len(population))]

    def _bads_simulation_wrapper(self, fitparams: list):
        """
        This function is called by bads. It calls the simulator function and
        returns the loss.
        """
        return self._run_simulator(fitparams)["loss"]

    def _run_simulator_with_results(self, fitparams, pop=None):
        """
        Runs the function simulator with the multiprocessing manager (if function is
        called multiple times this saves memory, otherwise same as calling simulator
        directly) and also returns the results.

        Args:
            fitparams (list):
                list with values for fitting parameters or list of lists with values
                for fitting parameters (first dimension is the number of parameters,
                second dimension is the number of neurons)

            pop (str, optional):
                ANNarchy population name. Default: None, i.e., the tuned population
                is used.

        Returns:
            return_dict (dict):
                dictionary needed for optimization with hyperopt, containing the loss,
                the loss variance (in case of noisy models with multiple runs per loss
                calculation), and the status (STATUS_OK for hyperopt) and the results
                generated by the experiment.
        """
        ### check if pop is given
        if pop is None:
            pop = self.pop
        ### initialize manager and generate m_list = dictionary to save data
        manager = multiprocessing.Manager()
        m_list = manager.dict()

        ### in case of noisy models, optionally run multiple simulations here to
        ### average the loss
        loss_list_over_runs = []
        all_loss_list_over_runs = []
        return_results = True
        if self.verbose:
            print(f"OptNeuron: run simulator with results {self.num_rep_loss} times")
        for _ in range(self.num_rep_loss):
            ### initialize for each run a new rng (--> not always have same noise in
            ### case of noisy models/simulations)
            rng = np.random.default_rng()
            ### run simulator with multiprocessing manager
            proc = Process(
                target=self._simulator,
                args=(fitparams, rng, m_list, return_results, pop),
            )
            proc.start()
            proc.join()
            ### get simulation results/loss
            ### list of losses for each neuron
            loss_list_over_runs.append(m_list[0])
            ### results object of experiment
            results_ist = m_list[1]
            ### list of the all_loss_list for each neuron
            all_loss_list_over_runs.append(m_list[2])

        ### create loss array, first dimension is the number of runs, second dimension
        ### is the number of neurons
        loss_arr = np.array(loss_list_over_runs)
        ### create all_loss array, first dimension is the number of runs, second
        ### dimension is the number of neurons, third dimension is the number of
        ### individual losses
        all_loss_arr = np.array(all_loss_list_over_runs)

        ### calculate mean and std of loss over runs
        if self.num_rep_loss > 1:
            ### resulting in 1D arrays for neurons
            loss = np.mean(loss_arr, 0)
            std = np.std(loss_arr)
            ### resulting in 2D array for neurons (1st dim) and individual losses (2nd dim)
            all_loss = np.mean(all_loss_arr, 0)
        else:
            ### just take the first entry (the only one)
            ### resulting in 1D arrays for neurons
            loss = loss_arr[0]
            std = np.array([None] * self.popsize)
            ### resulting in 2D array for neurons (1st dim) and individual losses (2nd dim)
            all_loss = all_loss_arr[0]

        ### if only one neuron, return loss and std as single values and all_loss as
        ### single 1D array (length is the number of individual losses)
        if self.popsize == 1:
            loss = loss[0]
            std = std[0]
            all_loss = all_loss[0]
        else:
            loss = loss
            std = std
            all_loss = all_loss

        ### return loss and other things for optimization and results
        if self.num_rep_loss > 1:
            return {
                "status": STATUS_OK,
                "loss": loss,
                "loss_variance": std,
                "std": std,
                "all_loss": all_loss,
                "results": results_ist,
            }
        else:
            return {
                "status": STATUS_OK,
                "loss": loss,
                "std": std,
                "all_loss": all_loss,
                "results": results_ist,
            }

    def _simulator(
        self, fitparams, rng, m_list=[0, 0, 0], return_results=False, pop=None
    ):
        """
        Runs the experiment with the given parameters and 'returns' the loss and
        optionally the results and all individual losses of the get_loss_function. The
        'returned' values are saved in m_list.

        Args:
            fitparams (list):
                list with values for fitting parameters or list of lists with values
                for fitting parameters (first dimension is the number of parameters,
                second dimension is the number of neurons)

            rng (numpy random generator):
                random generator for the simulation

            m_list (list, optional):
                list with the loss, the results, and the all_loss. Default: [0, 0, 0].

            return_results (bool, optional):
                If True, the results are returned. Default: False.

            pop (str, optional):
                ANNarchy population name. Default: None, i.e., the tuned population
                is used.
        """
        ### TODO use rng here and add it to CompNeuroExp
        ### check if pop is given
        if pop is None:
            pop = self.pop

        ### set parameters which should not be optimized and parameters which should be
        ### optimized before the experiment, they should not be reset by the
        ### experiment!
        self._set_fitting_parameters(fitparams, pop=pop)
        if self.verbose_run:
            param_dict = {
                param_name: (
                    fitparams[param_name_idx]
                    if isinstance(fitparams[param_name_idx], list)
                    else [fitparams[param_name_idx]]
                )
                for param_name_idx, param_name in enumerate(
                    self.fitting_variables_name_list
                )
            }
            print("OptNeuron: run simulator with parameters:")
            ef.print_df(pd.DataFrame(param_dict))

        ### conduct loaded experiment
        self.experiment.store_model_state(compartment_list=[pop])
        results = self.experiment.run(pop)
        self.experiment.reset()

        if self.results_soll is not None:
            ### compute loss_list, loss for each neuron
            loss_list = []
            ### wrapper_get_loss returns list (neurons) of lists (individual losses)
            all_loss_list = self._wrapper_get_loss(results, self.results_soll)
            ### loop over neurons
            for all_loss in all_loss_list:
                ### if all_loss is list, sum up individual losses
                if isinstance(all_loss, list) or isinstance(
                    all_loss, type(np.zeros(1))
                ):
                    loss_list.append(sum(all_loss))
                ### if all_loss is single value, just append to loss_list
                else:
                    loss_list.append(all_loss)
        else:
            all_loss_list = [999] * self.popsize
            loss_list = [999] * self.popsize

        ### "return" loss and other optional things
        m_list[0] = loss_list
        if return_results:
            m_list[1] = results
            m_list[2] = all_loss_list

    def _set_fitting_parameters(
        self,
        fitparams,
        pop,
    ):
        """
        Sets all given parameters for the population pop.

        Args:
            fitparams (list):
                list with values for fitting parameters, either a single list or a list
                of lists (first dimension is the number of parameters, second dimension
                is the number of neurons)
            pop (str):
                ANNarchy population name of the population whose parameters should
                be set (parameters are only set for the tuned population).
        """
        ### only set parameters of the fitted neuron model
        if pop != self.pop:
            return

        ### get all variables dict (combine fitting variables and const variables)
        all_variables_dict = self.const_params.copy()
        if self.verbose:
            print("OptNeuron: set fitting parameters:")
            print(f"  fitparams: {fitparams} ({self.fitting_variables_name_list})")
            print(f"  starting with const: {all_variables_dict}")

        ### multiply const params for number of neurons
        for const_param_key, const_param_val in all_variables_dict.items():
            if not (isinstance(const_param_val, str)):
                all_variables_dict[const_param_key] = [
                    all_variables_dict[const_param_key]
                ] * self.popsize
        if self.verbose:
            print(f"  adjusting for pop size: {all_variables_dict}")

        ### add fitting variables
        for fitting_variable_idx, fitting_variable_name in enumerate(
            self.fitting_variables_name_list
        ):
            if not (isinstance(fitparams[fitting_variable_idx], list)):
                add_params = [fitparams[fitting_variable_idx]] * self.popsize
            else:
                add_params = fitparams[fitting_variable_idx]
            all_variables_dict[fitting_variable_name] = add_params
        if self.verbose:
            print(f"  add fitting variables: {all_variables_dict}")

        ### evaluate variables defined by a str
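        ### (for each neuron, the expression is evaluated with the numeric values
        ### of all non-string variables of that neuron)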
        for key, val in all_variables_dict.items():
            if isinstance(val, str):
                all_variables_dict[key] = [
                    ef.evaluate_expression_with_dict(
                        val,
                        {
                            all_variables_key: all_variables_dict[all_variables_key][
                                neuron_idx
                            ]
                            for all_variables_key in all_variables_dict.keys()
                            if not (
                                isinstance(all_variables_dict[all_variables_key], str)
                            )
                        },
                    )
                    for neuron_idx in range(self.popsize)
                ]
        if self.verbose:
            print(f"  add str variables: {all_variables_dict}")

        ### set parameters
        for param_name, param_val in all_variables_dict.items():
            pop_parameter_names = get_population(pop).attributes
            ### only if param_name in parameter attributes
            if param_name in pop_parameter_names:
                if self.popsize == 1:
                    setattr(get_population(pop), param_name, param_val[0])
                else:
                    setattr(get_population(pop), param_name, param_val)

    def _test_fit(self, fitparams_dict):
        """
        Runs the experiment with the optimized parameters obtained by the
        optimization method and returns the loss, the results and all individual
        losses of the get_loss_function.

        Args:
            fitparams_dict (dict):
                dictionary with parameter names (keys) and their values (values)

        Returns:
            fit (dict):
                dictionary containing the loss, the loss variance (in case of noisy
                models with multiple runs per loss calculation), the status
                (STATUS_OK for hyperopt), and the results generated by the experiment.
        """
        results = self._run_simulator_with_results(
            [fitparams_dict[name] for name in self.fitting_variables_name_list]
        )
        ### if self.popsize > 1 --> transform results, loss etc. to only 1 neuron
        if self.popsize > 1:
            results["loss"] = results["loss"][0]
            results["std"] = results["std"][0]
            results["all_loss"] = results["all_loss"][0]
            results["results"] = self._get_results_of_single_neuron(
                results["results"], 0
            )
        return results

    def _run_with_sbi(self, max_evals, sbi_plot_file):
        """
        Runs the optimization with sbi.

        Args:
            max_evals (int):
                number of runs the optimization method performs

            sbi_plot_file (str):
                If you use "sbi": the name of the figure which will be saved and shows
                the posterior.

        Returns:
            best (dict):
                dictionary containing the optimized parameters and the posterior.
        """
        ### get prior bounds
        prior_min = []
        prior_max = []
        for _, param_bounds in self.variables_bounds.items():
            if isinstance(param_bounds, list):
                prior_min.append(param_bounds[0])
                prior_max.append(param_bounds[1])

        ### run sbi
        simulator, prior = prepare_for_sbi(
            self._sbi_simulation_wrapper,
            self.prior,
            {
                "lower_bound": torch.as_tensor(prior_min),
                "upper_bound": torch.as_tensor(prior_max),
            },
        )
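        ### SNPE (sequential neural posterior estimation) with a mixture density
        ### network ("mdn") as density estimator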
        inference = SNPE(prior, density_estimator="mdn")
        theta, x = simulate_for_sbi(
            simulator=simulator,
            proposal=prior,
            num_simulations=max_evals,
            num_workers=1,
        )
        density_estimator = inference.append_simulations(theta, x).train()
        posterior = inference.build_posterior(density_estimator)
        x_o = torch.as_tensor([0])  # data which should be obtained: loss==0
        posterior = posterior.set_default_x(x_o)

        ### get best params
        posterior_samples = posterior.sample(
            (10000,)
        )  # posterior = distribution P(params|data) --> set data and then sample possible parameters
        best_params = posterior_samples[
            torch.argmax(posterior.log_prob(posterior_samples))
        ].numpy()  # sampled parameters with highest prob in posterior

        ### create best dict with best parameters
        best = {}
        for param_idx, param_name in enumerate(self.fitting_variables_name_list):
            best[param_name] = best_params[param_idx]

        ### also return posterior
        best["posterior"] = posterior

        ### plot posterior
        plot_limits = [
            [prior_min[idx], prior_max[idx]] for idx in range(len(prior_max))
        ]
        analysis.pairplot(
            posterior_samples,
            limits=plot_limits,
            ticks=plot_limits,
            fig_size=(5, 5),
            labels=self.fitting_variables_name_list,
        )

        ### save plot
        sf.create_dir("/".join(sbi_plot_file.split("/")[:-1]))
        plt.savefig(sbi_plot_file, dpi=300)

        return best

    def _run_with_bads(self, max_evals):
        """
        TODO
        """

        ### prepare bads
        target = self._bads_simulation_wrapper
        lower, upper, x0, lower_guess, upper_guess = self._get_lower_upper_x0()

        ### TODO bads can handle noisy functions; one could return two values, loss and std
        self.bads_params_dict["uncertainty_handling"] = False
        self.bads_params_dict["max_fun_evals"] = max_evals

        ### run bads
        bads = BADS(
            fun=target,
            x0=x0,
            lower_bounds=lower,
            upper_bounds=upper,
            plausible_lower_bounds=lower_guess,
            plausible_upper_bounds=upper_guess,
            options=self.bads_params_dict,
        )
        optimize_result = bads.optimize()

        ### create best dict with best parameters
        best = {
            fitting_variable_name: optimize_result["x"][idx]
            for idx, fitting_variable_name in enumerate(
                self.fitting_variables_name_list
            )
        }
        return best

    @check_types()
    def run(
        self,
        max_evals: int,
        results_file_name: str = "opt_neuron_results/best",
        sbi_plot_file: str = "opt_neuron_plots/posterior.png",
        deap_plot_file: str = "opt_neuron_plots/logbook.png",
        verbose: bool = False,
    ):
        """
        Runs the optimization.

        Args:
            max_evals (int):
                number of runs the optimization method performs
            results_file_name (str, optional):
                name of the file which is saved. The file contains the optimized and
                target results, the obtained parameters, the loss, and the SD of the
                loss (in case of noisy models with multiple runs per loss calculation).
                Default: "opt_neuron_results/best".
            sbi_plot_file (str, optional):
                If you use "sbi": the name of the figure which will be saved and shows
                the posterior. Default: "opt_neuron_plots/posterior.png".
            deap_plot_file (str, optional):
                If you use "deap": the name of the figure which will be saved and shows
                the logbook. Default: "opt_neuron_plots/logbook.png".
            verbose (bool, optional):
                If True, detailed information is printed. Default: False.

        Returns:
            best (dict):
                dictionary containing the optimized parameters (as keys) and:

                - "loss": the loss
                - "all_loss": the individual losses of the get_loss_function
                - "std": the SD of the loss (in case of noisy models with multiple
                    runs per loss calculation)
                - "results": the results generated by the experiment
                - "results_soll": the target results
        """
        self.verbose = False
        self.verbose_run = verbose
        self.loss_history = []
        self.start_time = time()
        if self.method == "hyperopt":
            ### run optimization with hyperopt and return best dict
            best = fmin(
                fn=self._run_simulator,
                space=self.fv_space,
                algo=tpe.suggest,
                max_evals=max_evals,
            )
        elif self.method == "sbi":
            ### run optimization with sbi and return best dict
            best = self._run_with_sbi(max_evals, sbi_plot_file)
        elif self.method == "deap":
            best = self._run_with_deap(max_evals, deap_plot_file)
        elif self.method == "bads":
            if max_evals < 4:
                raise ValueError("bads needs at least 4 evaluations")
            best = self._run_with_bads(max_evals)
        else:
            print("ERROR run; method should be 'hyperopt', 'sbi', 'deap', or 'bads'")
            quit()
        ### obtain loss for the best parameters
        fit = self._test_fit(best)
        best["loss"] = float(fit["loss"])
        best["all_loss"] = fit["all_loss"]
        best["std"] = fit["std"]
        best["results"] = fit["results"]
        best["results_soll"] = self.results_soll
        self.results = best

        ### create loss history array
        self.loss_history = np.array(self.loss_history)

        ### SAVE OPTIMIZED PARAMS AND LOSS
        ### save as pkl file
        sf.save_variables(
            [best],
            [results_file_name.split("/")[-1]],
            (
                "/".join(results_file_name.split("/")[:-1])
                if len(results_file_name.split("/")) > 1
                else "./"
            ),
        )
        ### save human readable as json file
        json.dump(
            {key: best[key] for key in self.fitting_variables_name_list + ["loss"]},
            open(
                f"{results_file_name}.json",
                "w",
            ),
            indent=4,
        )

        return best

    def _run_with_deap(self, max_evals, deap_plot_file):
        """
        Runs the optimization with deap.

        Args:
            max_evals (int):
                number of runs (here generations) the optimization method performs

            deap_plot_file (str):
                the name of the figure which will be saved and shows the logbook
        """

        return self._deap_cma.run(
            max_evals=max_evals,
            plot_file=deap_plot_file,
        )
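
The loss handling in _simulator above expects the get_loss_function to return, for each neuron, either a single value or a list of individual losses; the individual losses are summed to obtain the total loss. A minimal sketch of such a function (the plain-array trace format is a simplifying assumption; in a real optimization the arguments are the experiment results and the target data):

import numpy as np

def get_loss(results_ist, results_soll):
    ### hypothetical loss: compare an optimized trace to a target trace;
    ### returning a list makes the individual losses available via "all_loss",
    ### their sum is the total loss
    r_ist = np.asarray(results_ist)
    r_soll = np.asarray(results_soll)
    mse = float(np.mean((r_ist - r_soll) ** 2))
    peak_err = float(np.max(np.abs(r_ist - r_soll)))
    return [mse, peak_err]

### quick self-contained check with dummy traces
target = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = target + 0.1 * np.random.default_rng(0).standard_normal(100)
print(get_loss(noisy, target))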

__init__(experiment, get_loss_function, variables_bounds, neuron_model, results_soll=None, target_neuron_model=None, time_step=1.0, recording_period=None, compile_folder_name='annarchy_OptNeuron', num_rep_loss=1, method='deap', prior=None, fv_space=None, record=[], cma_params_dict={}, bads_params_dict={}, source_solutions=[], variables_bounds_guess=None, verbose=False) #

This prepares the optimization. To run the optimization, call the run function.

Parameters:

- experiment (CompNeuroExp class, required): CompNeuroExp class containing a 'run' function which defines the simulations and recordings.
- get_loss_function (function, required): function which takes results_ist and results_soll as arguments and calculates/returns the loss.
- variables_bounds (dict, required): dictionary with parameter names (keys) and their bounds (values). If a single value is given, the parameter is constant, i.e., not optimized. If a list is given, the parameter is optimized and the list contains the lower and upper bound of the parameter (the order is not important). If a string is given instead of a number, it is interpreted as a mathematical expression which is evaluated with the other parameter values (e.g. {"x":[0,1],"dxy":[-1,1],"y":"x+dxy","z":5}); see the sketch after this list.
- neuron_model (ANNarchy Neuron, required): the neuron model whose parameters should be optimized.
- results_soll (Any, optional): some variable which contains the target data and can be used by the get_loss_function (second argument of get_loss_function). Warning: either provide results_soll or a target_neuron_model, not both! Default: None.
- target_neuron_model (ANNarchy Neuron, optional): the neuron model which produces the target data by running the experiment. Warning: either provide results_soll or a target_neuron_model, not both! Default: None.
- time_step (float, optional): the time step for the simulation in ms. Default: 1.0.
- recording_period (float, optional): the recording period for the simulation in ms. Default: None, i.e., the time_step is used.
- compile_folder_name (string, optional): the name of the ANNarchy compilation folder within annarchy_folders/. Default: 'annarchy_OptNeuron'.
- num_rep_loss (int, optional): only relevant for noisy simulations/models; how often the simulation should be run to calculate the loss (the defined number of losses is obtained and averaged). Default: 1.
- method (str, optional): either 'deap', 'sbi', 'hyperopt', or 'bads'; defines the tool which is used for optimization. Default: 'deap'.
- prior (distribution, optional): the prior distribution used by sbi. Default: None, i.e., uniform distributions between the variable bounds are assumed.
- fv_space (list, optional): the search space for hyperopt. Default: None, i.e., uniform distributions between the variable bounds are assumed.
- record (list, optional): list of strings which define what variables of the tuned neuron should be recorded. Default: [].
- cma_params_dict (dict, optional): dictionary with parameters for the deap.cma.Strategy (see https://deap.readthedocs.io/en/master/api/algo.html#deap.cma.Strategy for more information). Default: {}.
- bads_params_dict (dict, optional): dictionary with parameters for the bads optimization (see https://acerbilab.github.io/pybads/api/options/bads_options.html for more information). Default: {}.
- source_solutions (list, optional): list of tuples with the source solutions; each tuple contains a numpy array with the parameter values and the loss. Used for initialization of the cma optimization with deap. Default: [].
- variables_bounds_guess (dict, optional): dictionary with parameter names (keys) and their bounds (values) as lists. These bounds define the region where the minimum is expected. Used for the BADS optimization. Default: None.
- verbose (bool, optional): if True, print additional information. Default: False.
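
To make the bounds format concrete, here is a minimal sketch (the parameter names a, d_ab, b, and c are purely hypothetical):

variables_bounds = {
    "a": [0.0, 1.0],      # list: optimized within [0, 1]
    "d_ab": [-1.0, 1.0],  # list: optimized offset
    "b": "a + d_ab",      # string: evaluated with the other parameter values
    "c": 5.0,             # single value: constant, not optimized
}
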
Source code in src/CompNeuroPy/opt_neuron.py
@check_types(warnings=False)
def __init__(
    self,
    experiment: Type[CompNeuroExp],
    get_loss_function: Callable[[Any, Any], float | list[float]],
    variables_bounds: dict[str, float | str | list[float | str]],
    neuron_model: Neuron,
    results_soll: Any | None = None,
    target_neuron_model: Neuron | None = None,
    time_step: float = 1.0,
    recording_period: float | None = None,
    compile_folder_name: str = "annarchy_OptNeuron",
    num_rep_loss: int = 1,
    method: str = "deap",
    prior=None,
    fv_space: list = None,
    record: list[str] = [],
    cma_params_dict: dict = {},
    bads_params_dict: dict = {},
    source_solutions: list[tuple[np.ndarray, float]] = [],
    variables_bounds_guess: None | dict[str, list[float]] = None,
    verbose=False,
):
    """
    This prepares the optimization. To run the optimization, call the run function.

    Args:
        experiment (CompNeuroExp class):
            CompNeuroExp class containing a 'run' function which defines the
            simulations and recordings
        get_loss_function (function):
            function which takes results_ist and results_soll as arguments and
            calculates/returns the loss
        variables_bounds (dict):
            Dictionary with parameter names (keys) and their bounds (values). If
            single values are given as values, the parameter is constant, i.e., not
            optimized. If a list is given as value, the parameter is optimized and
            the list contains the lower and upper bound of the parameter (order is
            not important). If strings instead of numbers are given, the string is
            interpreted as a mathematical expression which is evaluated with the
            other parameter values (e.g. {"x":[0,1],"dxy":[-1,1],"y":"x+dxy","z":5}).
        neuron_model (ANNarchy Neuron):
            The neuron model whose parameters should be optimized.
        results_soll (Any, optional):
            Some variable which contains the target data and can be used by the
            get_loss_function (second argument of get_loss_function)
            !!! warning
                Either provide results_soll or a target_neuron_model, not both!
            Default: None.
        target_neuron_model (ANNarchy Neuron, optional):
            The neuron model which produces the target data by running the
            experiment.
            !!! warning
                Either provide results_soll or a target_neuron_model, not both!
            Default: None.
        time_step (float, optional):
            The time step for the simulation in ms. Default: 1.
        recording_period (float, optional):
            The recording period for the simulation in ms. Default: None, i.e., the
            time_step is used.
        compile_folder_name (string, optional):
            The name of the annarchy compilation folder within annarchy_folders/.
            Default: 'annarchy_OptNeuron'.
        num_rep_loss (int, optional):
            Only relevant for noisy simulations/models. How often the simulation
            should be run to calculate the loss (the defined number of losses is
            obtained and averaged). Default: 1.
        method (str, optional):
            Either 'deap', 'sbi', 'hyperopt', or 'bads'. Defines the tool which is used for
            optimization. Default: 'deap'.
        prior (distribution, optional):
            The prior distribution used by sbi. Default: None, i.e., uniform
            distributions between the variable bounds are assumed.
        fv_space (list, optional):
            The search space for hyperopt. Default: None, i.e., uniform
            distributions between the variable bounds are assumed.
        record (list, optional):
            List of strings which define what variables of the tuned neuron should
            be recorded. Default: [].
        cma_params_dict (dict, optional):
            Dictionary with parameters for the deap.cma.Strategy. Default: {}.
            See [here](https://deap.readthedocs.io/en/master/api/algo.html#deap.cma.Strategy) for more information.
        bads_params_dict (dict, optional):
            Dictionary with parameters for the bads optimization. Default: {}.
            See [here](https://acerbilab.github.io/pybads/api/options/bads_options.html) for more information.
        source_solutions (list, optional):
            List of tuples with the source solutions. Each tuple contains a numpy
            array with the parameter values and the loss. Used for initialization of
            cma optimization with deap. Default: [].
        variables_bounds_guess (dict, optional):
            Dictionary with parameter names (keys) and their bounds (values) as
            list. These bounds define the region where the minimum is expected. Used
            for the BADS optimization. Default: None.
        verbose (bool, optional):
            If True, print additional information. Default: False.
    """

    if len(self.opt_created) > 0:
        print(
            "OptNeuron: Error: Already another OptNeuron created. Only create one per python session!"
        )
        quit()
    else:
        print(
            "OptNeuron: Initialize OptNeuron... do not create anything with ANNarchy before!"
        )

        ### set object variables
        self.verbose = verbose
        self.verbose_run = False
        self.opt_created.append(1)
        self.record = record
        self.results_soll = results_soll
        self.variables_bounds = variables_bounds
        self.fitting_variables_name_list = self._get_fitting_variables_name_list()
        self.method = method
        if method == "hyperopt":
            if fv_space is None:
                self.fv_space = self._get_hyperopt_space()
            else:
                self.fv_space = fv_space
        self.const_params = self._get_const_params()
        self.num_rep_loss = num_rep_loss
        self.neuron_model = neuron_model
        if method == "sbi":
            self.prior = self._get_prior(prior)
        self.target_neuron = target_neuron_model
        self.compile_folder_name = compile_folder_name
        self._get_loss = get_loss_function
        self.cma_params_dict = cma_params_dict
        self.source_solutions = source_solutions
        self.variables_bounds_guess = variables_bounds_guess
        self.bads_params_dict = bads_params_dict
        self.loss_history = []
        self.start_time = time()
        self.recording_period = recording_period

        ### if using deap, popsize is the number of individuals for the optimization
        if method == "deap":
            self._deap_cma = self._prepare_deap_cma()
            self.popsize = self._deap_cma.deap_dict["strategy"].lambda_
        else:
            self.popsize = 1
        if self.verbose:
            print("OptNeuron: popsize:", self.popsize)

        ### check target_neuron/results_soll
        self._check_target()
        ### check neuron models
        self._check_neuron_models()

        ### setup ANNarchy
        setup(dt=time_step)

        ### create and compile model
        ### if both neuron model and target neuron model are given --> create both
        ### models, test them, then clear and re-create only the neuron model
        model, target_model, monitors = self._generate_models(self.popsize)

        self.pop = model.populations[0]
        if target_model is not None:
            self.pop_target = target_model.populations[0]
        else:
            self.pop_target = None
        ### create experiment with current monitors
        self.experiment = experiment(monitors=monitors)

        ### check variables of model
        self._test_variables()

        ### check neuron models, experiment, get_loss
        ### if results_soll is None --> generate results_soll
        self._check_get_loss_function()

        ### after checking neuron models, experiment and get_loss, clear ANNarchy
        ### and create/compile only the standard model again, thus also recreate
        ### the monitors and the experiment
        mf.cnp_clear()
        model, _, monitors = self._generate_models(self.popsize)
        self.monitors = monitors
        self.experiment = experiment(monitors=monitors)

run(max_evals, results_file_name='opt_neuron_results/best', sbi_plot_file='opt_neuron_plots/posterior.png', deap_plot_file='opt_neuron_plots/logbook.png', verbose=False) #

Runs the optimization.

Parameters:

- max_evals (int, required): number of runs the optimization method performs.
- results_file_name (str, optional): name of the file which is saved. The file contains the optimized and target results, the obtained parameters, the loss, and the SD of the loss (in case of noisy models with multiple runs per loss calculation). Default: 'opt_neuron_results/best'.
- sbi_plot_file (str, optional): if you use "sbi": the name of the figure which will be saved and shows the posterior. Default: 'opt_neuron_plots/posterior.png'.
- deap_plot_file (str, optional): if you use "deap": the name of the figure which will be saved and shows the logbook. Default: 'opt_neuron_plots/logbook.png'.
- verbose (bool, optional): if True, detailed information is printed. Default: False.

Returns:

- best (dict): dictionary containing the optimized parameters (as keys) and (see the usage sketch below):
  - "loss": the loss
  - "all_loss": the individual losses of the get_loss_function
  - "std": the SD of the loss (in case of noisy models with multiple runs per loss calculation)
  - "results": the results generated by the experiment
  - "results_soll": the target results
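
A usage sketch (assuming opt is an already prepared OptNeuron instance and "a" is one of the optimized parameter names from variables_bounds):

best = opt.run(max_evals=1000, results_file_name="opt_neuron_results/best")

print(best["a"])         # optimized value of the hypothetical parameter "a"
print(best["loss"])      # total loss obtained with the best parameters
print(best["all_loss"])  # individual losses of the get_loss_function

Besides the returned dictionary, run() stores the results as a pkl file and the optimized parameters together with the loss as a human-readable json file (both paths derived from results_file_name).
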
Source code in src/CompNeuroPy/opt_neuron.py
@check_types()
def run(
    self,
    max_evals: int,
    results_file_name: str = "opt_neuron_results/best",
    sbi_plot_file: str = "opt_neuron_plots/posterior.png",
    deap_plot_file: str = "opt_neuron_plots/logbook.png",
    verbose: bool = False,
):
    """
    Runs the optimization.

    Args:
        max_evals (int):
            number of runs the optimization method performs
        results_file_name (str, optional):
            name of the file which is saved. The file contains the optimized and
            target results, the obtained parameters, the loss, and the SD of the
            loss (in case of noisy models with multiple runs per loss calculation).
            Default: "opt_neuron_results/best".
        sbi_plot_file (str, optional):
            If you use "sbi": the name of the figure which will be saved and shows
            the posterior. Default: "opt_neuron_plots/posterior.png".
        deap_plot_file (str, optional):
            If you use "deap": the name of the figure which will be saved and shows
            the logbook. Default: "opt_neuron_plots/logbook.png".
        verbose (bool, optional):
            If True, detailed information is printed. Default: False.

    Returns:
        best (dict):
            dictionary containing the optimized parameters (as keys) and:

            - "loss": the loss
            - "all_loss": the individual losses of the get_loss_function
            - "std": the SD of the loss (in case of noisy models with multiple
                runs per loss calculation)
            - "results": the results generated by the experiment
            - "results_soll": the target results
    """
    self.verbose = False
    self.verbose_run = verbose
    self.loss_history = []
    self.start_time = time()
    if self.method == "hyperopt":
        ### run optimization with hyperopt and return best dict
        best = fmin(
            fn=self._run_simulator,
            space=self.fv_space,
            algo=tpe.suggest,
            max_evals=max_evals,
        )
    elif self.method == "sbi":
        ### run optimization with sbi and return best dict
        best = self._run_with_sbi(max_evals, sbi_plot_file)
    elif self.method == "deap":
        best = self._run_with_deap(max_evals, deap_plot_file)
    elif self.method == "bads":
        if max_evals < 4:
            raise ValueError("bads needs at least 4 evaluations")
        best = self._run_with_bads(max_evals)
    else:
        print("ERROR run; method should be 'hyperopt', 'sbi', 'deap', or 'bads'")
        quit()
    ### obtain loss for the best parameters
    fit = self._test_fit(best)
    best["loss"] = float(fit["loss"])
    best["all_loss"] = fit["all_loss"]
    best["std"] = fit["std"]
    best["results"] = fit["results"]
    best["results_soll"] = self.results_soll
    self.results = best

    ### create loss history array
    self.loss_history = np.array(self.loss_history)

    ### SAVE OPTIMIZED PARAMS AND LOSS
    ### save as pkl file
    sf.save_variables(
        [best],
        [results_file_name.split("/")[-1]],
        (
            "/".join(results_file_name.split("/")[:-1])
            if len(results_file_name.split("/")) > 1
            else "./"
        ),
    )
    ### save human readable as json file
    json.dump(
        {key: best[key] for key in self.fitting_variables_name_list + ["loss"]},
        open(
            f"{results_file_name}.json",
            "w",
        ),
        indent=4,
    )

    return best