Low and high beta rhythms have different motor cortical sources and distinct roles in movement control and spatiotemporal attention [1]

Simon Nougaret, Laura López-Galdo, Emile Caytan, Julien Poitreau (Institut de Neurosciences de la Timone, INT, UMR, Aix-Marseille Université, CNRS, Marseille)

Date: 2024-07

Behavioral event codes (TTL, 8 bits) were transmitted online to the DAQ system from the VCortex software (version 2.2 running on Windows XP; NIMH, http://dally.nimh.nih.gov), which was used to control the behavioral task. A custom rebuild of the VCortex software allowed simultaneous online monitoring of hand and eye gaze positions in the common reference frame of the animal’s visual monitor display.

Continuous hand position (X and Y) was obtained from 2 perpendicularly superimposed contactless linear position magnetostrictive transducers (model MK4 A; GEFRAN, Provaglio d’Iseo, Italy). The “floating” magnetic cursor was attached to a manipulandum that could be moved along 2 pairs of rails with ball bearings, each pair aligned with one of the 2 transducers. The Y-oriented rails were fixed on top of the X-oriented rails. As such, this system provided somewhat less frictional resistance in the Y direction than in the X direction. Furthermore, either uni-directional X or Y displacement met somewhat less frictional resistance than the combined displacement needed to reach the diagonally placed targets. Hand position was used online to control the behavioral task, and was also saved by VCortex for offline analysis (at a 250 Hz sampling rate).

In a majority of sessions, eye gaze position (X and Y) was recorded by the DAQ system (video-based infrared eye tracking; RK-716PCI (PAL version) at 50 Hz for the first single-tip electrode recordings in monkey T, or ETL-200 at a 240 Hz sampling rate for the array recordings in monkey T and all recordings in monkey M; ISCAN Inc., Woburn, Massachusetts, USA). The eye-tracking camera was positioned next to the lower right corner of the monkey’s computer monitor.

We used 2 different data acquisition (DAQ) systems to record neuronal and behavioral data. All linear array recordings in monkey T, and all recordings (single electrodes and linear arrays) in monkey M, were obtained on a recording platform with components commercialized by Blackrock Neurotech (Salt Lake City, Utah, USA). This system included Cereplex M digital head-stages (versions PN 6956, PN 9360, and PN 10129) connected to a Digital Hub (versions PN 6973, PN 6973 DEV 16-021, and PN 10480) via custom HDMI cables (versions PN 8083 and PN 8068), which transmitted signals via fiber optics to a 128-channel neural signal processor (NSP hardware version 1.0), and the Cerebus Central Suite control software (v6.03 and v6.05 for monkeys T and M, respectively; running on Windows 7). An adapter (PN 9038) permitted connecting multiple single-tip electrodes to the Cereplex M Omnetics connector (monkey M). Neuronal signals were hardware filtered (0.3 Hz to 7.5 kHz), digitized, and saved for offline analysis at a sampling rate of 30 kHz.

All single-tip electrode recordings in monkey T were obtained on a recording platform with components commercialized by Alpha Omega. This system included the Alpha-Map system for online monitoring of signals (running on Windows XP) and the MCP-Plus multi-channel signal processor including analog head-stages.
Neuronal signals from each electrode were amplified with a gain of 5,000 to 10,000 (with a unit-gain head-stage), hardware filtered (1 Hz to 10 kHz), digitized, and saved for offline analysis at a sampling rate of 32 kHz.

During recording days (maximally 5 days a week), a multi-electrode, computer-controlled microdrive (MT-EPS, Alpha Omega, Nazareth Illit, Israel) was attached to the recording chamber and used to transdurally insert up to 5 single-tip microelectrodes (typical impedance 0.3 to 1.2 MΩ at 1,000 Hz; FHC) or up to 2 linear microelectrode arrays (either V- or S-probes, Plexon, Dallas, Texas, United States of America, or LMA, Alpha Omega; each with 24 or 32 contacts, inter-contact spacing of 100, 150, or 200 μm; 12.5 or 15 μm contact diameters) into motor cortex. In this study, we employ the term “site” for the recording obtained from each individual single-tip electrode (or from each linear array) in individual behavioral sessions. The electrodes (or arrays) were positioned and lowered independently within the chamber (Flex-MT drive; Alpha Omega) in each session. Individual guide-tubes that did not penetrate the dura were used for each electrode/array (no guide was used for the more rigid LMA array).

For single-tip electrodes, the reference was common to all electrodes and connected, together with the ground, to a metal screw on the saline-filled titanium recording chamber. For the linear array recordings, the reference was specific to each array type; S2 Table summarizes the different reference positions used. For the LMA (Alpha Omega), the reference was an insulated wire exposed at the tip, either immersed in the chamber saline or attached with a crocodile clip to the probe’s stainless steel tube (which in turn was lowered into the chamber liquid but did not extend into brain tissue, as the lower part of the probe was epoxy-insulated). For the V- and S-probes (Plexon), in most cases the reference was the stainless steel shaft of the array (extending into brain tissue, in close proximity to the probe’s recording contacts). In a few sessions, the reference was instead placed on a skull-screw on the more posterior head-post (6/36 sites using V-probes in monkey T) or on a screw on the saline-filled recording chamber (2/50 sites using S-probes in monkey M). For both array types, the ground was connected either to a skull-screw of the remote titanium head-fixation post or to a screw of the titanium recording chamber.

The manual (horizontal) work area of the monkey was scaled down with respect to the display on the monitor (by a factor of about 0.7). In manual (horizontal) space, the diagonal distance (center to center) between the fixation spot and the peripheral targets was 6.5 cm. The required central fixation zone was defined as a radius of 0.3 cm, and the accepted touch zone of the peripheral targets had a radius of 1 cm. These touch zones corresponded to the hand cursor overlapping more than halfway with the fixation spot or the peripheral outlines. In the offline analysis of the hand signal, we used the spatial scaling of the visual scene on the computer monitor.

The animal had to select the (valid) SC according to the color indicated by SEL (i.e., a delayed color match to sample) and ignore the 2 (distractor) SCs of different colors. The GO signal was presented after the final 1,000 ms delay following SC3, prompting the animal to execute the center-out arm-reaching movement to the memorized valid SC position.
The GO signal was directionally non-informative, consisting of the simultaneous onset of 4 red light-emitting diodes (LEDs; embedded in a thin Plexiglas plate in front of the monitor) at the centers of the 4 circular target outlines. The RT and the movement time each had a maximum allowance of 500 ms. The animal was trained to stop and “hold” within the correct peripheral target outline for 300 ms to obtain a reward. The touch of the valid target was signaled by an auditory tone (50 ms), and a completed hold by another tone (50 ms). Reward was delivered 500 ms after the completed hold and consisted of a small drop of liquid (water or diluted fruit juice). Monkey T was not rewarded for non-hold trials, while monkey M was given a smaller reward on non-hold trials (on the valid target; 500 ms after breaking the hold). For both animals, these non-hold trials were included in the analysis (about 10% of all included trials).

All 4 diagonal target positions were equally likely for each SC. Thus, successive SCs in the same trial could be presented in the same position. This resulted in 192 unique trials, combining the 3 color conditions with the 4 independent positions for SC1, SC2, and SC3 (3 × 4 × 4 × 4 = 192). In monkey T, who was not willing to work for as many trials as monkey M, only 3 of the 4 target positions were used in each session (randomly selected for each session), to reduce the number of unique trials. For both animals, to ease the task, the 3 color conditions were presented separately in small blocks of approximately 15 unique trials per block, cycling across multiple blocks of the 3 color conditions to complete all the unique trials. The unique trials within each block were presented in pseudo-random order. Incorrect trials within a block were re-presented later in the same block, and each block was completed only when all unique trials in the block were correctly executed.

The monkey initiated the trial by positioning the cursor inside the central hand fixation spot. This central touch ended the flashing of the fixation spot (which remained on) and was accompanied by an auditory tone, presented for 50 ms. After holding this central position for 1,000 ms, a selection cue (SEL) indicating the color to attend for that trial appeared on the screen for 300 ms, displayed behind but extending well beyond the central yellow disc and the overlying hand cursor. SEL consisted of one of 3 differently colored polygons (blue, green, or pink; approximately 3 cm radius) defining the color condition. A 1,000 ms delay followed SEL offset. Thereafter, 3 peripheral spatial cues (SC1-3) were presented in sequence, each displayed for 300 ms, with a 1,000 ms delay after each of them. The SCs were colored discs (0.9 cm radius), always presented in the temporal order blue-green-pink, each within one of the 4 peripheral red outlines.

The 2 monkeys were trained to perform a visuomotor delayed match-to-sample task with fixed cue order and a GO signal (Fig 1B). The task required arm-reaching responses in one of 4 (diagonal) directions from a common center position, performed by holding a handle that was freely movable in the 2D horizontal plane. The visual scene was displayed on a vertical computer monitor (LCD; 75 Hz) in front of the monkey (Fig 1A). We here describe the monitor stimuli in cm units, but since the viewing distance was about 57 cm, this approximates to the same values in degrees of visual angle.
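As a quick check of this approximation (standard trigonometry, not a value taken from the paper): the visual angle θ subtended by an on-screen extent s at viewing distance d is

\theta = 2\arctan\left(\frac{s}{2d}\right), \qquad \theta_{1\,\mathrm{cm}} = 2\arctan\left(\frac{1\ \mathrm{cm}}{2 \times 57\ \mathrm{cm}}\right) \approx 1.0^{\circ},

so 1 cm on the monitor subtends roughly 1 degree of visual angle at this viewing distance.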
Before the start of each trial, the monitor displayed the handle (hand cursor) position (small white square; 0.4 cm edges), a central fixation spot (yellow flashing disc; 0.45 cm radius), and the 4 possible peripheral target positions (red circular outlines; 1.5 cm radius, at 9 cm diagonal distances from the center). The position of the cursor was updated on the monitor every 40 ms (approximately every third frame), but only if the accumulated displacement since the previous update exceeded 0.1 cm (to avoid flicker due to electronic noise).

Subsequent to learning the visuomotor task (see below), the monkeys were prepared for multi-electrode recordings in the left hemisphere of the motor cortex (M1 and PMd), contralateral to the trained arm. In a first surgery, prior to completed task learning, a titanium head-post was implanted posteriorly on the skull, fixed with titanium bone screws and bone cement. In a second surgery, several months later, a cylindrical titanium recording chamber (19 mm inner diameter) was implanted. The positioning of the chamber above upper-limb regions of M1 and PMd was confirmed with T1-weighted MRI scans (prior to surgery in both animals, and also postmortem in monkey M) and with intra-cortical electrical microstimulation (ICMS; as described in [70]) performed at the end of single-tip electrode recording days during the first recording weeks in both monkeys. The recording sites included in this study spanned about 15 mm across the cortical surface along the anterior-posterior axis and only include sites determined with ICMS to be related to upper-limb movements (Fig 3C). The exact border between the PMd and M1 areas was not estimated.

Two adult male Rhesus monkeys (T and M, 10 and 14 kg, respectively) participated in this study. Care and treatment of the animals during all stages of the experiments conformed to the European Commission Regulations (Directive 2010/63/EU on the protection of animals used for scientific purposes) as applied in French law (decision of the 1st of February 2013). The experimental protocol was evaluated by the local Ethics Committee (CEEA 071) and carried out in a licensed institution (B1301404) under the authorization 03383.02 delivered by the French Ministry of Higher Education and Research. Previously published studies using data from these 2 monkeys [20,27,28,66-68] were based on recordings from the opposite hemisphere during performance of another visuomotor task.

The 2 macaques used in this study were monitored daily, either by the animal care staff or by the researchers involved in the study. The facility veterinarian regularly checked the general health and welfare of the animals. The animals were pair-housed, and toys and enrichment objects, usually filled with treats, were routinely introduced in their home cage to promote exploratory behavior. During task performance, the animals received liquid reward from a dispenser. The animals were water-restricted in their home cage, with free access to dry pellets. In the event of reduced liquid consumption during task performance, the minimum daily intake was reached by giving extra water and fruit or vegetables in the home cage, delayed for a few hours after the end of training. The daily fluid intake, computed from each animal’s reference body weight (measured prior to entering the liquid restriction regime), was never below 18 ml/kg, a level at which macaques have been shown to effectively regulate their blood osmolality [69].
On resting days (e.g., weekends), the animals received a complete ration of liquid in the form of water and fruits in the home cage.

Statistical analysis

Behavioral performance. All analyses of behavioral and neuronal data were conducted offline using Matlab (The MathWorks, Inc.) and Python. Multiple comparison chi-squared tests using the Matlab function crosstab were used to compare the percentages of distractor errors across the 3 color conditions and the 4 movement directions; 3-way ANOVAs were used to quantify the variability in RT across sessions, color conditions, and movement directions for each monkey. Finally, to determine whether there was any systematic within-session modulation of RT, we normalized (z-scored) the RT within each session (to compensate for any differences in average RT across sessions) before collapsing trials across all sessions for each monkey. We then calculated the Pearson correlation coefficient between the normalized RT and the trial number within the session.

Hand position analysis. The hand position signals recorded with VCortex were realigned in time offline with the other data recorded by the DAQ system, using the behavioral event codes, and up-sampled (linear interpolation) from 250 Hz to 1 kHz. The hand position signals were calibrated (scaled) online in the VCortex configuration to match the visual display before storing to file, and in the analysis we used the spatial scaling of the visual scene in cm. The RTs for the center-out reaching movements were redefined offline using the hand trajectories. First, hand velocity and acceleration were computed in each trial using a Savitzky–Golay algorithm. To determine reach movement onset, within a 2,000 ms epoch centered on GO, periods of prolonged (>50 ms) increased velocity above an empirically determined velocity threshold (6 cm/s) were detected, and the final, preceding increase in acceleration above an empirically determined acceleration threshold (6 cm/s²) was taken as the time of movement onset (an illustrative sketch of this procedure is given below). These RTs were confirmed in both animals by visual inspection of single-trial trajectories in several sessions. We also quantified hand micro-movements during the maintenance of a stable central hand position, using hand velocity and position.

Eye position offline calibration and analysis. In a majority of sessions, we recorded eye position with an infrared camera. A rough online calibration of the gain and offset of the eye X and Y signals was done during the first behavioral trials in each recording session, to compensate for small changes in head fixation or camera position compared to the previous day/session. This simplified online calibration was adopted to avoid training the monkey in a fixation task. The center of gaze was set to zero (center) while the monkey looked at the small yellow central target in order to place the hand cursor therein to initiate a new trial. Then, on some days, the X or Y gain was updated slightly so that the spontaneous eye fixations on the peripheral target outlines matched their positions in the Cortex software interface. The trials before calibration (typically 0 to 3 correct trials) were excluded from offline analyses involving eye movements. For data analysis, the eye signals recorded with the DAQ system were re-calibrated offline, to correct for the distortion induced by having the camera off the horizontal and vertical central axes of gaze.
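To illustrate the reach onset detection described under “Hand position analysis” above, a minimal Python sketch follows; the thresholds are those given in the text, whereas the function name, the Savitzky–Golay window settings, and the back-tracking rule are assumptions rather than the authors’ implementation.

```python
import numpy as np
from scipy.signal import savgol_filter

def detect_movement_onset(x, y, fs=1000, start_idx=0,
                          vel_thresh=6.0, acc_thresh=6.0, min_dur_ms=50):
    """Estimate reach onset from hand position (cm) sampled at fs Hz:
    compute smoothed velocity/acceleration (Savitzky-Golay derivatives),
    find a sustained (>50 ms) period with speed above 6 cm/s after GO,
    then back-track to the preceding acceleration-threshold crossing."""
    dt = 1.0 / fs
    vx = savgol_filter(x, window_length=31, polyorder=3, deriv=1, delta=dt)
    vy = savgol_filter(y, window_length=31, polyorder=3, deriv=1, delta=dt)
    speed = np.hypot(vx, vy)                       # cm/s
    acc = savgol_filter(speed, window_length=31, polyorder=3, deriv=1, delta=dt)

    min_samples = int(min_dur_ms * fs / 1000)
    run = 0
    for i in range(start_idx, len(speed)):
        run = run + 1 if speed[i] > vel_thresh else 0
        if run >= min_samples:
            move_start = i - run + 1
            j = move_start
            while j > 0:                           # last acc-threshold crossing
                if acc[j - 1] <= acc_thresh < acc[j]:
                    break
                j -= 1
            return j if j > 0 else move_start      # sample index of onset
    return None                                    # no movement detected
```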
First, the raw eye signals were inspected visually to exclude from offline calibration and analysis the trials recorded before the completion of the rough online calibration, which typically amounted to removing the first 0 to 3 correct trials in each session. Raw data were downsampled from the acquisition sampling frequency (1 or 30 kHz) to the camera sampling frequency (50 or 240 Hz) and linearly rescaled from bits to volts. We computed the eye velocity in volts/s using the Savitzky–Golay algorithm. For the offline calibration algorithm, we only considered data points that likely belonged to fixation periods (i.e., whose velocity was below the 10th percentile of the total velocity distribution). At this stage, the superimposition of eye positions during these low-velocity epochs across all trials in a session already showed the expected clustering of the data around 5 positions on the screen, whose geometry resembled the center and the 4 peripheral target positions used in the task. Thus, we were able to define boundaries in the voltage space to separate data points according to whether they were recorded when the monkey was looking within the work area (approximate boundaries of the computer monitor) or away from it (e.g., looking at the ceiling, or signal saturation due to eye blinks).

The low-velocity (fixation) data occurring within the work area were then sorted into 5 clusters using a k-means algorithm (kmeans function in Matlab, using squared Euclidean distance). Cluster centers were assumed to represent the target positions in voltage space. We next generated a 2D nonlinear model to compensate for the distortion due to the camera position, relating the target coordinates on the screen (in cm) to the voltage amplitudes of the corresponding centroids. This was achieved by fitting a polynomial function relating each coordinate in screen space to the XY coordinates in voltage space (a simplified sketch of these steps is given below). The correction was then applied to the complete eye traces. A detailed version of this correction can be found in [71]. Each data point was then re-assigned to a cluster if it was located at a distance <2 cm from the target’s center coordinates, or labeled as being between clusters (but within the work area), or as outside of the work area (including saturated). Eye position, velocity, and acceleration were then saved for further analysis, scaled in cm of the visual display, alongside the cluster membership of each data point. Furthermore, data points outside the work area that were beyond the lower or upper 0.99 percentile boundaries of the raw X and Y voltage signals were marked as “saturated.”

To detect saccadic eye movements, we applied, trial by trial, a recursive algorithm that searches for the largest breakpoint in a piecewise stationary process. First, we computed the cumulative 2D velocity of the eye signal in cm/s. This representation yields a pseudo-staircase profile alternating between steep and slowly increasing periods over time. We extracted the highest decile of the velocity distribution and marked the corresponding steps in the staircase as boundaries defining periods when the subject was looking coarsely at the same area. These steps corresponded to blinks or to obvious large saccades, and the steady periods were either single fixation periods or multiple fixations with intermittent smaller saccades.
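A compact Python sketch of the clustering and distortion-correction steps described above is shown here. It is illustrative only: scikit-learn’s KMeans and a simple bilinear voltage-to-cm model stand in for the Matlab implementation (whose exact polynomial form is detailed in [71]), and the matching of cluster centroids to targets is assumed to be done beforehand.

```python
import numpy as np
from sklearn.cluster import KMeans

def calibrate_eye(volt_fix, volt_all, target_cm):
    """volt_fix: (n, 2) low-velocity (fixation) eye samples in volts;
    volt_all: (m, 2) complete eye trace of the session in volts;
    target_cm: (5, 2) screen coordinates (cm) of the center and the 4
    peripheral targets, ordered to match the cluster centroids.
    Returns the calibrated trace in cm of the visual display."""
    # 1) Cluster the fixation samples into 5 groups in voltage space
    centers = KMeans(n_clusters=5, n_init=10).fit(volt_fix).cluster_centers_

    # 2) Fit a nonlinear voltage-to-cm mapping on the 5 matched pairs.
    #    A bilinear model (1, x, y, x*y) is used here because 5 points
    #    cannot constrain a full quadratic; this is an assumption.
    def design(v):
        x, y = v[:, 0], v[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y])

    coeffs, *_ = np.linalg.lstsq(design(centers), target_cm, rcond=None)

    # 3) Apply the correction to the complete eye trace
    return design(volt_all) @ coeffs
```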
During the steady periods, the cumulative distribution showed a slow increase due to noise originating from micro-movements and the recording device. Because the contribution of this noise depends on the location of the fixation on the screen, we compensated for it by subtracting the average slope of each period separately. This yielded a piecewise stationary process showing pseudo-horizontal steady epochs, with a better signal-to-noise ratio for the intermittent smaller saccades. Second, we applied a recursive algorithm to this process, which, within a given time window, computes at each data point the difference between the average values before and after that point. The maximum difference was extracted and compared to a threshold value computed from the velocity profile of a reference saccade (10 ms duration, 60 cm/s peak velocity). If the maximum difference was larger than the threshold, it was considered an actual transition and the time window was split in 2 at this time point. Starting with a time window covering the whole trial, the algorithm defined new (smaller and smaller) time windows at each iteration, and the new window boundaries were considered transitions. To avoid transitions being detected multiple times, we introduced a “refractory period” of +/- 15 ms around accepted transitions. Fixation periods were finally defined by sorting the transitions between fixations into detected saccades or detected micro-saccades, depending on whether or not the Euclidean distance between the isobarycenters of 2 successive fixations was larger than a threshold (the change in eye position on the screen corresponding to an eye movement of 0.5 cm). Saccade onset/offset times were saved for further offline analyses alongside the other calibrated eye signals detailed above.

Finally, eyeblinks were detected as 2 subsequent (<150 ms apart) passings of the eye signal velocity beyond a velocity threshold (500 cm/s for the 50 Hz sessions and 800 cm/s for the 240 Hz sessions). The data points in a window spanning the gap between these subsequent threshold passings, as well as a few preceding and following flanking data points, were marked as eyeblinks. Visual inspection confirmed that this method was able to distinguish between saccades and abrupt velocity increases due to eyeblinks, even if large standalone saccades sometimes had velocities beyond the thresholds used for eyeblink detection.

Decoding task condition with behavior. We built 2 classifiers based on the temporal evolution of the hand velocity and of the gaze position in single trials. In both cases, the signal was cut between SEL and GO and then downsampled by averaging in 50 ms non-overlapping windows in each trial. All correct trials from all sessions in which the specific behavioral signal was exploitable were included (n = 11,587 for hand velocity and n = 8,630 for eye gaze position). A random forest classifier was trained on 60% of the data and tested on the remaining 40%. This procedure was repeated for 20 different train/test splits to ensure the stability of the results, and the average confusion matrix across all runs was computed (see the illustrative sketch below). The estimated chance level based on shuffling the data (100 shuffles per decoder) never exceeded 0.35.

LFP spectral analysis and beta amplitude extraction. All sessions with sufficient data quality were included in the analysis.
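Returning to the behavioral decoders just described, a minimal scikit-learn sketch is given below; the stratified splitting, the default random forest hyperparameters, and all names are assumptions rather than the authors’ code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def decode_color_condition(X, y, n_splits=20, test_size=0.4):
    """X: (n_trials, n_bins) hand velocity or gaze position averaged in
    50 ms windows between SEL and GO; y: color condition per trial.
    Train/test on 60%/40% splits, 20 times, and return the average
    (row-normalized) confusion matrix across runs."""
    cms = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed)
        clf = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
        cms.append(confusion_matrix(y_te, clf.predict(X_te), normalize='true'))
    return np.mean(cms, axis=0)
```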
The raw signals were low-pass filtered offline at a 250 Hz cutoff frequency (zero-phase fourth-order Butterworth filter, using the butter and filtfilt functions in Matlab) to obtain the LFP signal, which was then downsampled to 1 kHz and saved for further analysis. For this study, we included only 1 contact from each of the linear array penetrations, selected to be well within cortex and with low noise (e.g., no heartbeat artifacts). LFP activity from 110 individual sites (63 with single-tip electrodes and 47 with linear arrays) in 59 sessions in monkey T, and 60 sites in 39 sessions (10 with single-tip electrodes and 50 with linear arrays) in monkey M, were included in the analysis. A site is here defined as the conjunction of a specific chamber coordinate of the electrode entry and cortical depth, in one recording session. In the included LFP sites, trials with obvious artifacts (mainly due to teeth grinding, static electricity, or heart-beat signal), detected by visual inspection, were excluded from further analysis (12.3% of all trials in monkey T and 5.1% in monkey M). As the duration for which the monkeys were willing to work varied across sessions, after trial exclusion the analyzed sites included on average 96.4 +/- 48.8 (STD) trials (range 19 to 184) in monkey T and 147.3 +/- 80.3 trials (range 18 to 281) in monkey M. We also included the sites with few trials, since the majority of the neuronal data analyses were done on trials collapsed across many sites.

Power spectral density (power for short) estimates of the LFP were obtained using the pwelch function of Matlab. For the LFP spectrogram examples (Fig 2A-2C), we high-pass filtered the LFP at a 3 Hz cutoff, using a fourth-order Butterworth filter. Power was estimated in single-trial sliding windows of 300 ms duration, with 50 ms shifts, at 1 Hz resolution, before averaging across trials. For the average spectrograms for each monkey (Fig 2E), we also used 300 ms sliding windows, 50 ms shifts, and 1 Hz resolution. For each individual LFP site, we first high-pass filtered the signal (3 Hz cutoff, fourth-order Butterworth filter) before calculating the power for each window in single trials. Next, the power matrix (trial × window × frequency) for each LFP was normalized by dividing by the mean power between 10 and 40 Hz across trials and windows for that LFP. We then computed, for each window, the grand average power across all individual trials of all normalized LFPs (i.e., each trial contributed equally to the grand mean, independent of the total number of trials for the specific LFP site). For single-site and average spectrograms, we used a perceptually flat color map [72], with color limits set to the minimum and maximum power values between 12 and 40 Hz between the onset of SEL and GO, separately for each site or each monkey.

To determine the peak frequencies of the 2 observed beta bands, we estimated power in a 900 ms epoch preceding SC1 (blue) onset, across all trials for each LFP site (after high-pass filtering at 3 Hz; fourth-order Butterworth filter). Within this epoch, we used five 500-ms windows, with an overlap of 400 ms, to obtain 1 average power estimate per trial (see the illustrative sketch below). We then normalized the power matrix (trial × frequency) for each LFP by dividing by its mean power across trials between 10 and 40 Hz (S3 Fig).
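As an illustration of the per-trial spectral estimate and normalization just described, a Python sketch using SciPy’s Welch estimator in place of Matlab’s pwelch is shown below; the substitution, the Hamming window, and the zero-padding to 1 Hz bins are assumptions.

```python
import numpy as np
from scipy.signal import welch

def trial_power_pre_sc1(lfp_epoch, fs=1000, nfft=1000):
    """Power estimate for one 900 ms pre-SC1 epoch (900 samples at 1 kHz):
    five 500 ms windows with 400 ms overlap give one averaged spectrum per
    trial, zero-padded to `nfft` points for 1 Hz frequency bins."""
    freqs, pxx = welch(lfp_epoch, fs=fs, window='hamming',
                       nperseg=500, noverlap=400, nfft=nfft)
    return freqs, pxx

def normalize_site_power(pxx_trials, freqs, lo=10.0, hi=40.0):
    """Divide a site's (trials x frequencies) power matrix by its mean
    power between 10 and 40 Hz across all trials, as described above."""
    band = (freqs >= lo) & (freqs <= hi)
    return pxx_trials / pxx_trials[:, band].mean()
```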
To allow a “fair” comparison of power across the 2 beta bands despite the 1/f nature of the aperiodic signal component, the aperiodic component was removed for each site independently, before averaging across all trials of all sites and plotting the grand average spectral power (Fig 3A). To estimate (and remove) the aperiodic component of the signal, we used an approach similar to FOOOF [73]. Preliminary analysis using the standard FOOOF method showed that it did not adequately estimate the aperiodic component of the spectrum, presumably due to high power at the lower frequencies. Following its assumption that, within a restricted frequency range, the aperiodic component is a straight line in log-log space, and since we were interested in parametrizing the power only in the beta range, we adapted the method to our specific needs. We first computed the logarithm of the power of all single trials at each site. Once in log-log space, we calculated the average power per site and looked for 2 local minima: the last minimum before 10 Hz and the last minimum before 50 Hz. The aperiodic component was estimated as the straight line connecting these 2 minima, and was then removed from each single trial in that session. If one of the 2 local minima was not present in a session, the first point of the line was set at 10 Hz and the second at the minimum value between 35 and 50 Hz (a simplified sketch of this procedure is given below). We also determined, for each individual trial, the frequency between 10 and 40 Hz with maximal power (beta peak frequency) in the periodic-only component of the signal (Fig 3A).

Based on these distributions of power and peak frequencies, for both monkeys a range of 13 to 19 Hz for the low band and 23 to 29 Hz for the high band was used to determine the dominant beta band of each LFP site. We computed a beta band dominance index as the mean power in the periodic-only signal component across all trials and frequencies in the low band, minus the mean power across all trials and frequencies in the high band, divided by the sum of the two. Significance of band dominance was determined with a paired t test across trials, taking the mean power across all frequencies in each band for each trial (Fig 3B).

Phase-locking of neuronal spiking to LFP beta phase. To verify that the LFP beta bursts were at least partially of local origin, we analyzed the phase-locking of simultaneously recorded neurons to the LFP beta phase of the site-dominant band. We included only the laminar recording sites, and tested phase-locking for neurons from all laminar contacts to the LFP of the selected contact on the same laminar probe, to ensure proximity of the 2 signals. We analyzed the delay before SC1 (blue), since the beta amplitude was generally strong in this delay in both animals and in both bands. Only neurons with more than 100 spikes in this delay, accumulated across all trials, were included. Beta phase was extracted from the Hilbert transform of the beta-filtered LFP, only for the dominant beta band at each LFP site, and the phase at each spike time was determined. To quantify the phase-locking, we first used Rayleigh’s test of non-uniformity of circular data [74]. To determine whether the locking was significant for individual neurons, a trial-shuffling method was used.
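As an illustration, a compact Python sketch of the simplified aperiodic fit and of the band dominance index described above is shown below; the array layouts, the local-minimum search via SciPy, and all names are assumptions rather than the authors’ implementation.

```python
import numpy as np
from scipy.signal import argrelmin

def remove_aperiodic(freqs, log_pxx_trials):
    """Subtract a straight-line aperiodic component, fit in log-log space
    between the last local minimum below 10 Hz and the last local minimum
    below 50 Hz of the site-average spectrum (with the fallbacks described
    in the text). `log_pxx_trials` is log10 power, shape (trials, freqs)."""
    log_f = np.log10(freqs)
    avg = log_pxx_trials.mean(axis=0)
    minima = argrelmin(avg)[0]
    below10 = minima[freqs[minima] < 10]
    below50 = minima[(freqs[minima] > 10) & (freqs[minima] < 50)]
    i1 = below10[-1] if below10.size else np.argmin(np.abs(freqs - 10))
    i2 = (below50[-1] if below50.size
          else np.argmin(np.where((freqs >= 35) & (freqs <= 50), avg, np.inf)))
    slope = (avg[i2] - avg[i1]) / (log_f[i2] - log_f[i1])
    aperiodic = avg[i1] + slope * (log_f - log_f[i1])
    return log_pxx_trials - aperiodic[np.newaxis, :]   # periodic-only component

def band_dominance(periodic_trials, freqs, low=(13, 19), high=(23, 29)):
    """Beta band dominance index: (low - high) / (low + high), using the
    mean periodic-only power across all trials and frequencies per band."""
    p_low = periodic_trials[:, (freqs >= low[0]) & (freqs <= low[1])].mean()
    p_high = periodic_trials[:, (freqs >= high[0]) & (freqs <= high[1])].mean()
    return (p_low - p_high) / (p_low + p_high)
```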
Trial-shuffling is an efficient method for obtaining a “baseline” measure of phase-locking, as it destroys the direct temporal relationship between the 2 signals while preserving their individual properties, such as rhythmicity and dependencies on external (task) events. The phase-locking analysis (Rayleigh’s test) was repeated 1,000 times while randomly combining beta phases and spike times from different trials. If the original data yielded a larger z-statistic from the Rayleigh’s test than 950/1,000 (equivalent to p < 0.05) of the trial-shuffled controls, the phase-locking of the neuron was considered significant.

Decoding task condition with beta amplitude.

Preprocessing for beta amplitude analysis. Given the similarity of the behavioral and neuronal data from the 2 animals up to this point, for all subsequent analyses we combined the LFPs of both monkeys, while keeping low and high band dominant sites separate. We furthermore continued the analyses using the single-trial instantaneous beta amplitude. For each LFP site, we first band-pass filtered the signal to extract the dominant beta band, either 16 +/- 4 Hz for low dominant sites or 26 +/- 5 Hz for high dominant sites, using eighth-order Butterworth filters. We next calculated the instantaneous amplitude (envelope) of the beta-filtered LFP time series by constructing the analytic signal using the Hilbert transform. The LFP was then cut into trials, before normalizing the beta amplitude by subtracting the grand mean amplitude and dividing by the grand amplitude standard deviation. After normalization, individual trials from all LFP sites with the same beta band dominance were pooled to construct large matrices (trials × time) for each of the 2 beta bands, combining data from the 2 monkeys (Fig 4A; see the illustrative sketch below).

Decoding procedure. First, 2D reduction visualization (t-SNE) was used to explore whether the different color conditions were separable in each of the 2 beta bands (Fig 4B). Second, we built 2 classifiers, using the high and low beta bands separately, to decode the color conditions. For each, the features were extracted from the temporal evolution of the beta amplitude in single trials: we calculated the average beta amplitude in 50 ms non-overlapping time bins from touch to GO in each trial. A random forest estimator from the scikit-learn library [75] was trained on the data. Using a grid search, we found the parameters that maximized classifier performance for both frequency bands (max_depth = 80, max_features = 3, min_samples_leaf = 3, min_samples_split = 8, n_estimators = 200). Unambiguous correct trials (trials in which none of the distractors were presented in the same quadrant) were split in a 60% to 40% ratio between the train and test sets, respectively. To ensure the stability of the method, we repeated the procedure using 20 different data splits, always with class balance in the train set. The average performance for each of the classes was computed by averaging across repetitions. After training the classifier on the unambiguous correct trials, the same model was used to predict distractor error trials (only including trials in which the 2 distractors were not presented in the same quadrant). In this case, we predicted either the color of the attended (distractor) SC or the color of SEL; i.e., either the SC the monkey actually used or the SC the monkey should have used. The chance level was calculated by shuffling the labels in 100 train-test splits of the data, for both the high and low beta classifiers.
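A minimal Python sketch of the beta amplitude extraction and of the classifier configuration described above follows; the SciPy/scikit-learn calls, the second-order-section filtering, and all names are assumptions, while the band edges and the random forest hyperparameters are those reported in the text.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.ensemble import RandomForestClassifier

def beta_amplitude(lfp, fs=1000, band=(12, 20)):
    """Band-pass the LFP in the site-dominant beta band (16 +/- 4 Hz or
    26 +/- 5 Hz) and return the instantaneous amplitude (envelope) of the
    analytic signal. Passing N=4 to butter() yields an 8th-order band-pass,
    since SciPy doubles the order for band-pass designs."""
    sos = butter(4, np.asarray(band) / (fs / 2), btype='bandpass', output='sos')
    return np.abs(hilbert(sosfiltfilt(sos, lfp)))

def zscore_and_bin(amp_trials, fs=1000, bin_ms=50):
    """Normalize by the grand mean/std over all trials and time points,
    then average in non-overlapping 50 ms bins (classifier features)."""
    z = (amp_trials - amp_trials.mean()) / amp_trials.std()
    bin_len = int(bin_ms * fs / 1000)
    n_bins = z.shape[1] // bin_len
    return z[:, :n_bins * bin_len].reshape(z.shape[0], n_bins, bin_len).mean(axis=2)

# Random forest with the grid-search hyperparameters reported in the text
clf = RandomForestClassifier(n_estimators=200, max_depth=80, max_features=3,
                             min_samples_leaf=3, min_samples_split=8)
```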
All the accuracy values estimated in the different shuffle test sets were below 0.37, which we set as the overall chance level for the results.

[1] https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002670 (Creative Commons Attribution 4.0)