| context (string, 107–1.69k chars) | A (string, 104–2.81k chars) | B (string, 113–1.79k chars) | C (string, 105–3.22k chars) | D (string, 119–2.36k chars) | label (string, 4 classes) |
|---|---|---|---|---|---|
We preprocessed all ncRNA sequences by replacing ‘T’s with ‘U’s, since both bases are complementary to adenine and are structurally similar (‘T’ denotes thymine in DNA, while ‘U’ denotes uracil in RNA). This resulted in a dataset over four main bases and, counting ambiguity codes and the gap symbol, 16 symbols in total: ‘A’, ‘C’, ‘G’, ‘U’, ‘R’, ‘Y’, ‘K’, ‘M’, ‘S’, ‘W’, ‘B’, ‘D’, ‘H’, ‘V’, ‘N’, and ‘-’.
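A minimal sketch of this preprocessing step in Python (the helper name and test strings are ours, for illustration only):

```python
# Map DNA-style sequences onto the RNA alphabet by replacing T with U;
# ambiguity codes and the gap symbol '-' are left untouched.
IUPAC_SYMBOLS = set("ACGURYKMSWBDHVN-")  # the 16 symbols retained after preprocessing

def to_rna(seq: str) -> str:
    """Replace thymine (T) with uracil (U) in an ncRNA sequence."""
    return seq.upper().replace("T", "U")

assert to_rna("acgtn-") == "ACGUN-"
assert set(to_rna("ACGTRYKMSWBDHVN-")) <= IUPAC_SYMBOLS
```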
|
The large-scale dataset used in the pre-training phase was collected from RNAcentral [47], the largest ncRNA dataset available to date. This dataset is a comprehensive collection of ncRNA sequences, representing all ncRNA types from a broad range of organisms. It combines ncRNA sequences across 47 different databases, resulting in a total of ∼27 million RNA sequences (Supplementary Tables 2, 3, 4).
|
Moreover, to minimize redundancy without compromising the size of our dataset (i.e., to preserve as many sequences as possible), we removed duplicate sequences using CD-HIT-EST, which was set to a 100% similarity threshold. After the above preprocessing steps, a final, large-scale dataset consisting of over 23.7 million ncRNA sequences was obtained. We named this final dataset ‘RNAcentral100’, and we used this dataset to train our RNA foundation model in a self-supervised manner (see RNA language model (RNA-FM) in the Supplementary Information for more details).
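As a simplified stand-in for this deduplication step (the authors use CD-HIT-EST at a 100% similarity threshold, which also clusters sequences contained within longer ones; the sketch below only removes exact duplicates and keeps the first occurrence):

```python
def drop_exact_duplicates(records):
    """records: iterable of (sequence_id, sequence) pairs; keeps the first occurrence of each sequence."""
    seen, unique = set(), []
    for seq_id, seq in records:
        if seq not in seen:
            seen.add(seq)
            unique.append((seq_id, seq))
    return unique

example = [("a", "ACGU"), ("b", "ACGU"), ("c", "GGCC")]
assert [r[0] for r in drop_exact_duplicates(example)] == ["a", "c"]
```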
|
Although our RNA-FM can alleviate the problem of data scarcity, there is still less structural data available for RNAs than for proteins. As a result, we collected a non-redundant self-distillation dataset with ground-truth secondary structure from the RNAStralign and bpRNA-1M databases. We filtered this dataset by removing sequences with more than 256 or fewer than 16 nucleotides, resulting in a dataset of 27,732 sequences. RhoFold+ was initially trained using only PDB data; the resulting model was then used to generate a self-distillation dataset by inferring pseudo-structural labels. We re-trained the model by sampling 25% of the PDB data and 75% of the distillation data for further improvement. During training, we masked out pseudo-label residues with pLDDT scores <0.7 and uniformly sub-sampled the MSAs to augment the distillation dataset.
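A hedged sketch of these data-handling steps (the record layout, field names, and helper names are illustrative assumptions, not the RhoFold+ implementation):

```python
import random

def length_filter(records, lo=16, hi=256):
    # Keep only sequences with between 16 and 256 nucleotides.
    return [r for r in records if lo <= len(r["seq"]) <= hi]

def sample_training_batch(pdb_records, distill_records, n, pdb_fraction=0.25, rng=random):
    # Mix 25% experimentally determined (PDB) samples with 75% self-distillation samples.
    n_pdb = int(round(n * pdb_fraction))
    batch = rng.choices(pdb_records, k=n_pdb) + rng.choices(distill_records, k=n - n_pdb)
    rng.shuffle(batch)
    return batch

def mask_low_confidence(residue_labels, plddt, threshold=0.7):
    # Drop pseudo-labels for residues whose predicted confidence (pLDDT) is below the threshold.
    return [lab if conf >= threshold else None for lab, conf in zip(residue_labels, plddt)]
```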
|
Below, we detail how we constructed the large-scale non-coding RNA (ncRNA) dataset, followed by model and training details.
|
B
|
= positive momentum virial theorem
|
In genetics, the natural time increment to consider is discrete (generation), whereas in physics continuous-time is more natural. Thus, the discrete Price equation pertains to change in a trait after a single generation, whereas the virial theorem is formulated with continuous-time, and is additionally time averaged. However, the less intuitive forms of these equations that arise from the correspondences derived above may yield important insights. For example, the perspective of the virial theorem as a special case of the equipartition theorem [34] may be fruitful in evolutionary biology [31]. Translation between biology and physics via the virial theorem and the Price equation may also accelerate discovery of generalizations. While the stochastic Price equation in evolution [41] and the stochastic virial theorem in astronomy [7] were discovered independently, their similarity suggests other generalizations could similarly parallel each other. Moreover, the virial theorem has been applied in a variety of fields (for example economics [2]), meaning that understanding its relationship to the Price equation could be relevant beyond physics and biology.
|
The dynamical interpretation of evolutionary theory posits a correspondence between theories of evolution and Newtonian mechanics [46, 23]. In this framework, notions such as selection or mutation in biology are associated to forces in physics [46]. The identical form of equations (4) and (5) can constrain such associations and clarify subsequent analogies. For example, although the standard form of the virial theorem in (2) is an energy equation, equation (5) is a velocity equation, which in biology translates to rates of change of biological quantities. Furthermore, the rate of change of a trait or phenotype, i.e., $\frac{d}{dt}\mathbb{E}(\mathbf{z}(t))$ in equation (4) or the finite difference $\Delta\overline{\mathbf{z}}(t)$ in equation (3), corresponds to the momentum-averaged bulk velocity of a collection of physical objects. The remaining terms in each equation similarly have physical meaning in the context of classical mechanics in equation (5), or biological meaning in equation (3).
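For reference, the continuous-time Price equation that the text refers to as equation (4) can be written in this notation (our reconstruction, with $\mathbf{r}(t)$ the relative growth rate, i.e., fitness):

$$\frac{d}{dt}\,\mathbb{E}\bigl(\mathbf{z}(t)\bigr) \;=\; \mathrm{cov}\bigl(\mathbf{r}(t),\,\mathbf{z}(t)\bigr) \;+\; \mathbb{E}\!\left(\frac{d\mathbf{z}(t)}{dt}\right),$$

where the covariance term is read below as a selection rate and the expectation term as a transmission rate.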
|
The momentum-averaged positions $\mathbb{E}(\mathbf{z}(t))$ and velocities $\frac{d}{dt}\mathbb{E}(\mathbf{z}(t))$ are discrete analogs of momentum-averaged position and momentum velocity in electromagnetism, where they emerge from the virial density in an application of the virial theorem to electromagnetic pulses [11]. Whereas $\mathrm{cov}(\mathbf{r}(t),\mathbf{z}(t))$ is frequently referred to as the selection term in the Price equation [5], the connection to the virial theorem suggests that it is better described as a selection rate (Table 1). Similarly, the transmission term $\mathbb{E}\left(\frac{d\mathbf{z}(t)}{dt}\right)$ is more accurately a transmission rate. Most significantly, while the dynamical interpretation typically relies on associating force to natural selection, drift, migration, or mutation [23], the equivalence between the virial theorem and the Price equation suggests that force is more naturally associated to fitness. The correspondence of force to a rate of change is not surprising, since force is also a rate of change, specifically the rate of change of momentum. This stands more in line with the statistical interpretation of evolutionary theory [54, 53, 29], which, among several critiques of the dynamical interpretation, finds fault with the analogies of biological processes such as mutation with forces in physics, arguing that the physical forces are causal in a way that processes such as selection or mutation are not [54]. However, the analogy of population growth with force can be viewed as consistent with the dynamical interpretation; for example, population growth can directly affect DNA polymorphism patterns [55]. Moreover, the virial theorem in the setting of the ecological simple harmonic oscillator affirms the observation in [23] that “natural selection turns out to be more similar to forces such as friction and elastic forces rather than the more canonical gravitation.”
|
Ultimately, analogies between the Price equation and the virial theorem point towards potentially productive directions for exploration in both biology and physics. The statistical framing of the virial theorem in (5) highlights phenomena that may have been overlooked in the physics realm. For example, the first term on the right-hand side of (5), namely $\mathrm{cov}(\mathbf{r}(t),\mathbf{z}(t))$, can be understood to quantify the extent of the Yule-Simpson effect [33, 56, 45], which describes a situation where within-group trends can be reversed upon averaging. In biology, the Price equation has the potential to be used more widely as a tool. Although it has been hailed as a unifying framework for researchers [25], one that “can serve as a heuristic principle to formulate and systematize different theories and models in evolutionary biology” [26], the emphasis on its use has been more oriented toward understanding how it generalizes specific equations, rather than applying it for biological discovery. For example, the Price equation can be used to derive the Breeder’s equation [4, 57], Fisher’s fundamental theorem [37, 35], the house of cards approximation for genetic variance at mutation-selection balance [57, 50], and many other formulas and identities in genetics [57, 40]. However, it has been referred to as a tautology and a vacuous statement without application. In [51] the Price equation is described as a theorem that establishes that “If the left-hand side is computed as suggested in [36], and the right-hand side too, then they are equal.” This critique of the Price equation, namely that it does not and cannot serve as a tool, stands in contradiction to evidence from physics, where the mathematically equivalent virial theorem has been understood as a powerful tool since its use to discover dark matter in 1933 [58]. The manifold applications of the virial theorem [28] suggest that there is still much to gain from the application of the Price equation as a tool for biology. In fact, the equivalence we have demonstrated between the Price equation and the virial theorem shows that the description of missing heritability as dark matter [27] may be understood to be more than just an informal analogy between mysteries in genetics and astronomy.
|
A
|
K-fold cross-validation (CV) is the predominant methodology for assessing machine learning models, with its advantages and limitations extensively explored, particularly in single-objective (SO) scenarios [9, 10, 11].
|
Although various methods have been developed to enhance performance estimation in model selection using k-fold CV, their design and implementation have been limited to SO problems. Tsamardinos et al. [12] compared double CV, the Tibshirani and Tibshirani method [13], and nested CV in their ability to improve fitness estimation for SO problems. These algorithms modify fitness estimations but do not change the model selection process; the chosen model remains the same as it would be using simple hyperparameter optimization with k-fold CV for model evaluation. Automated ML (AutoML) tools offer an approach designed to explore various model and hyperparameter combinations. These tools aim to identify and deliver the most effective model along with an assessment of its performance. In Tsamardinos et al. [14], six AutoML tools were compared. Of these, only one had a predictive performance estimation strategy that could adjust for multiple model validations (limited to SO problems and without affecting model selection), while most of the tools must withhold a test set for an unbiased estimate of the winning model's performance, thus losing samples from the final model training.
|
In biomarker discovery, the focus is often on optimizing the accuracy of machine learning models using the selected molecular features, while also minimizing the number of features to ensure clinical feasibility and resource efficiency. Characterising all the best compromises (or trade-offs) between predictive value and feature set size is a multi-objective (MO) optimization problem [3, 4], and it can be solved by means of MO feature selection (MOFS) techniques [5, 6, 7, 8]. These techniques aim to identify not just a single best solution, as in single-objective (SO) problems, but rather a Pareto front of solutions. This front is the set of optimal solutions that illustrate the trade-offs between different objectives. However, all candidate solutions (or biomarker models selected by the employed MOFS technique) are evaluated on the validation set, which can result in the overestimation of the performance of the selected models.
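To make the notion of a Pareto front concrete, a small sketch (ours, not from the paper) that keeps only the non-dominated (number-of-features, validation-error) pairs when both objectives are minimized:

```python
def pareto_front(candidates):
    """candidates: list of (n_features, error) pairs; returns the non-dominated set."""
    front = []
    for i, (nf_i, err_i) in enumerate(candidates):
        dominated = any(
            nf_j <= nf_i and err_j <= err_i and (nf_j < nf_i or err_j < err_i)
            for j, (nf_j, err_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((nf_i, err_i))
    return sorted(set(front))

print(pareto_front([(3, 0.20), (5, 0.18), (5, 0.25), (10, 0.17), (4, 0.19)]))
# [(3, 0.2), (4, 0.19), (5, 0.18), (10, 0.17)]
```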
|
K-fold CV returns a model trained on all the available samples and an estimation of its performance computed by averaging the $k$ CV results. The obtained model performance is usually underestimated when only one hyperparameter configuration is used. However, it tends to be overestimated when multiple configurations are evaluated, and the model (or feature set) with the best performance is returned [2]. This situation becomes particularly pronounced in MO approaches, where identifying multiple favorable trade-offs often leads to an increased number of models to be evaluated.
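The selection-induced optimism can be illustrated with a toy simulation (ours, using pure-noise data, so every candidate model has a true accuracy of 0.5): the average CV score over candidates is unbiased, but the score of the selected best candidate is clearly inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_folds, n_val = 200, 5, 20
# Simulated per-fold accuracies for each candidate model on data with no signal.
fold_acc = rng.binomial(n=n_val, p=0.5, size=(n_models, n_folds)) / n_val
cv_scores = fold_acc.mean(axis=1)
print(f"mean CV score over all models: {cv_scores.mean():.3f}")  # close to 0.5 (unbiased)
print(f"CV score of the selected best: {cv_scores.max():.3f}")   # well above 0.5 (overestimate)
```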
|
Consequently, this does not lead to any improvement in the actual model selection process. On the other hand, strategies that involve subtracting a constant value from the model evaluation metrics used for ranking multiple solutions also fall short. This approach alone is insufficient to significantly impact the ranking order and thus remains ineffective for enhancing the model selection process. However, if the adjustment takes into account some characteristic of the models, such as the variance in performance during the validation phase or the feature set size, it might change their order. It may be argued that such an adjustment could improve the model selection if the model ranking used the adjusted estimations. In summary, to the best of our knowledge, no previous work has experimentally assessed the effectiveness of methods for mitigating this overestimation in MO problems using ML algorithms. Additionally,
|
C
|
The data set consists of 39 cases with different levels of misalignment between the different b-value image volumes. For each subject, we chose only one slice where the ROI in the right lung was labeled. The images were then cropped to a shape of 96×96 and normalized by the 0.99 quantile of the DWI image acquired without diffusion gradients (b-value = 0 s/mm²).
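A hedged sketch of this preprocessing (the array layout (n_bvalues, H, W), the center-crop choice, and the helper names are our assumptions, not the authors' code):

```python
import numpy as np

def center_crop(img, size=96):
    h, w = img.shape[-2:]
    top, left = (h - size) // 2, (w - size) // 2
    return img[..., top:top + size, left:left + size]

def normalize_by_b0(dwi):
    # dwi: (n_bvalues, H, W); index 0 is assumed to be the b=0 volume.
    return dwi / np.quantile(dwi[0], 0.99)

dwi = np.random.rand(6, 128, 128)
prepared = normalize_by_b0(center_crop(dwi))
assert prepared.shape == (6, 96, 96)
```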
|
To ensure the reproducibility of our findings, we established two distinct, non-overlapping groups of 16 cases each for hyperparameter tuning. The composition of these groups was planned to encapsulate a wide array of gestational ages, thereby encompassing nearly the full breadth of ages present in our dataset. We conducted hyperparameter tuning for each group independently, as detailed in Section 4.3. The remaining 23 cases left out of each group are designated as that group's test cases. Our primary findings and analysis will be conducted on these specific cases.
|
Figure 7: The correlations between f and the GA in the canalicular stage are calculated in two datasets: group 1 test cases and group 2 test cases.
|
The tuning process was conducted separately and independently for two distinct groups, each comprising 16 cases. The criterion used for selecting the optimal hyperparameters was based on the correlation between the IVIM parameter $f$ and gestational age during the canalicular stage of fetal development (GA < 26 weeks).
|
Figure 7 displays correlations derived from two test groups, where each group's test cases are those from the opposing original group (the 16 cases used for hyperparameter tuning). Average IVIM parameters were computed in the ROI for each case across all evaluated methods, utilizing the best hyperparameters for each group as detailed in Section 5.1 for the IVIM-Morph calculations. Notably, IVIM-Morph demonstrates greater consistency in terms of correlation between the two groups, unlike other methods (except TRF-SLS and Syn - Reg to $b_0$), which showed varying correlations across the groups.
|
A
|
These issues become more significant when using RL to model biological systems, as biological agents rarely behave deterministically even after learning is complete.
|
This form of cost has been employed to model biological control costs in bacterial chemotaxis [21] and immunological learning [48].
|
The parameter $\sigma \in \mathbb{R}_{\geq 0}$ is the randomness of the agents' motility, determined by the scaling of time and space.
|
This persistent stochasticity reflects the inherent randomness of biological processes, making deterministic control strategies costly and hence unnatural [48].
|
This principle has been applied to various biological processes from synaptic signaling to embryo fertilization, where the arrival time of the fastest individuals is more critical than the population average.
|
C
|
Although previous studies used a random split [8, 33, 50], we observe that, due to the strong correlation between successive structures sampled by molecular dynamics simulations, a random split allows networks to achieve high validation scores even when they have memorized the training data rather than learned useful abstractions from it; the models then perform poorly on independent data. Consequently, for all our numerical experiments, we split the data into training and validation sets by time. That is, we select the first 50% of the long trajectory for training and the remainder for validation. If we had access to multiple independent trajectories, randomly choosing trajectories for training and validation would also be appropriate. Some studies split the trajectory into equal segments and draw random segments for training and validation [65, 46] (k-fold cross-validation). When there are only two segments, this approach is identical to ours. When every structure is its own segment, one recovers the random split. Intermediate numbers of segments result in intermediate amounts of correlation between the training and validation sets. In cases where we vary the amount of training data, we first select the first 50% of the trajectory and then divide only this half of the trajectory into segments that we draw randomly for training; the second half of the trajectory is used as the hold-out validation set. This approach is fundamentally different from cross-validation and minimizes the correlation between the training and validation datasets.
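A sketch of the two splitting strategies described above for a single long trajectory of frames (the function names and frame count are illustrative):

```python
import numpy as np

def time_split(n_frames, train_fraction=0.5):
    # First half of the trajectory for training, second half for validation.
    cut = int(n_frames * train_fraction)
    return np.arange(cut), np.arange(cut, n_frames)

def segmented_subsample(train_idx, n_segments, n_keep, rng):
    # Draw random segments from the training half only, used when varying the amount of training data.
    segments = np.array_split(train_idx, n_segments)
    chosen = rng.choice(n_segments, size=n_keep, replace=False)
    return np.concatenate([segments[i] for i in sorted(chosen)])

rng = np.random.default_rng(0)
train_idx, val_idx = time_split(10_000)
small_train = segmented_subsample(train_idx, n_segments=20, n_keep=5, rng=rng)
print(len(train_idx), len(val_idx), len(small_train))  # 5000 5000 1250
```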
|
The computational costs for training VAMPnets with different token mixers are shown in Figure 8. The simplest GNN using pooling is not much more computationally costly than an MLP that takes distances between Cα atoms as inputs. The GNNs with token mixers are about an order of magnitude more computationally costly but still manageable (hundreds of seconds) even without advanced acceleration techniques such as flash-attention or compilation. We expect the memory and computational requirements to scale with token number quadratically for SubFormer and subquadratically for SubMixer (depending on the expansion dimension in the token-mixing blocks); these requirements should scale linearly with respect to embedding dimension and network depth.
|
For chignolin, the GNNs clearly outperform a multilayer perceptron (MLP) that takes distances between pairs of Cα atoms as inputs; there is not a significant difference between pooling and the mixers considered. For trp-cage and villin, the GNN with pooling consistently achieves the lowest scores. The distance-based MLP and GNN with SubMixer perform comparably, presumably because the distances between pairs of Cα atoms are sufficient to describe the folding (in contrast to chignolin, as we discuss below). For villin, we combined SubMixer and SubFormer with GVP and augmented them with a global token; this enables the GNNs to outperform the distance-based MLP. The improvement is particularly striking for SubFormer. The models with GVP are more expressive because they use the equivariant features at the token mixing stage and directly mix them with global features after message-passing.
|
For each system and token mixer architecture pair that we consider, we independently train three VAMPnets using different random number generator seeds (and the training-validation split described in Section IV.4). We report the training and validation VAMP-2 scores for the different token mixer architectures for each of the three systems in Figures S1 and S2. For chignolin, GNNs with pooling (summing), SubMixer, and SubFormer reach approximately the same maximum validation score. GNNs with SubMixer and SubFormer require fewer epochs to reach the convergence criteria, but they require more computational time per epoch, as we discuss in Section VI. For both villin and trp-cage, the token mixers generally outperform pooling.
|
D
|
$S(0)=S_0$, $E(0)=E_0$, $I(0)=I_0$, $Q(0)=Q_0$, and $H(0)=H_0$.
|
We classify this decision-making framework as a population game, where an individual's payoff is determined by their own behaviour as well as the collective behaviour of the community. The players are individuals exposed ($E(t)$) to infection, i.e., the strategy update takes place after exposure to infection in the context of disease dynamics. It is worth noting that individuals who choose to self-isolate are aware of their potential exposure to infection and voluntarily participate in self-quarantine. The decision-making is not based on symptoms of infection. Let $x$ ($0 \leq x \leq 1$) represent the fraction of the players who disclose their infection and opt for quarantine. It might be interesting to note that players of the present generation compete not only with players of the previous generation but also with players from past generations who have similar behaviours, since the individual choice relies on the current illness prevalence and availability of hospital treatment. We suppose that people imitate other individuals' behaviour, which is more likely for strategic decision-making in various social engagements. In particular, they adopt strategies from other members with a likelihood proportional to the projected pay-off increase if the sampled individual's pay-off is greater [33, 9]. Individuals are presumed to choose their strategy based on the perceived benefits of disclosing the infection and quarantine.
|
We model the human population as consisting of six distinct compartments: susceptible (S), exposed (E), infected (I), quarantined (Q), hospitalized (H), and recovered (R). Denoting $S, E, I, Q, H$, and $R$ as the number of individuals in the respective compartments, the total population ($N$) is given by $N = S + E + I + Q + H + R$. Figure 2 displays the schematic of the disease model, and Table 1 describes the parameters used in the model. At the onset of disease spread, a susceptible individual who comes in contact with an infected individual may or may not develop symptoms. We define the exposed state (E) as the subpopulation with a suspected infection. The mean transmission rate of individuals from the $S$ to the $E$ state is denoted by $\beta_s$.
|
We do not explicitly include the recovery compartment $R$ in the model equations (2.1), as individuals become completely immune after recovery and the population is closed. $x$ represents the fraction of exposed individuals who choose to disclose their exposure to infection and are eventually quarantined (see the game-theoretic model below for a description of $x$).
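A schematic numerical sketch of SEIQH-type dynamics consistent with the compartments described above; the transition structure and all rate names and values below are illustrative assumptions, not the paper's equation (2.1):

```python
from scipy.integrate import solve_ivp

def seiqh(t, y, beta_s, sigma, gamma_i, gamma_q, gamma_h, eta, x, N):
    S, E, I, Q, H = y
    new_exposed = beta_s * S * I / N
    dS = -new_exposed
    dE = new_exposed - sigma * E
    dI = (1 - x) * sigma * E - (gamma_i + eta) * I   # undisclosed exposures become infectious
    dQ = x * sigma * E - gamma_q * Q                  # disclosed exposures are quarantined
    dH = eta * I - gamma_h * H                        # some infectious cases are hospitalized
    return [dS, dE, dI, dQ, dH]

y0 = [9990, 5, 5, 0, 0]
sol = solve_ivp(seiqh, (0, 160), y0,
                args=(0.4, 0.2, 0.1, 0.1, 0.08, 0.02, 0.5, 10_000))
print(sol.y[:, -1].round(1))  # final compartment sizes under the assumed parameters
```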
|
Figure 4. The dynamics of different trajectories of the model for various values of the disease transmission rate $\beta_s$: (a) infected population, (b) quarantined population, (c) hospitalized population, and (d) proportion $x$ of individuals who choose to disclose their infection, with $\kappa = 0.5$, $A = 10$, and $p_s = 0.8$. In panel (a), the dotted line indicates the dynamics of the infected population when no individual chooses to disclose exposure, i.e., $x = 0$. These comparative trajectories underscore the importance of the decision game on the infection burden.
|
C
|
At the onset of the epidemic, $m$ individuals are randomly selected as initially infectious. Each infectious individual remains in this state for a duration drawn from an exponential distribution with rate $\gamma$. During this period, the individual contacts their immediate neighbors according to a Poisson process with intensity $\beta$. If a contacted neighbor is susceptible, they become infectious immediately. Once the infectious period ends, the individual recovers and becomes immune to further infections. All infectious periods and Poisson processes are assumed to be independent (see, e.g., [12]).
|
Figure 1: SIR Dynamics on Network. Blue nodes represent susceptible individuals, while red and pink ones represent the initially infected and secondarily infected individuals, respectively. The black node indicates a removed individual. Dashed half-edges connect uniformly at random to form solid edges.
|
A stochastic model represents contact patterns during an epidemic as a graph, where nodes correspond to individuals, and edges denote potential transmission routes. The stochastic SIR epidemic process on a network of size $n$ can be described as follows.
|
In many applications, it is useful to track the spread of an epidemic while simultaneously constructing the underlying transmission network. This can be achieved by generating a random graph in tandem with modeling the epidemic's spread. The process, illustrated in Figure 1, begins by assigning to each node a number of unconnected half-edges based on the degree distribution $p$. Static connections are then formed by matching the half-edges of infectious nodes to other available half-edges in the network as part of the Poisson contact process with intensity $\beta$, as described earlier. If the connected node is susceptible, transmission occurs. Otherwise, the infectious individual attempts to connect its remaining half-edges to other available half-edges, repeating this process until all half-edges are matched or no further connections are possible. While this matching process may occasionally result in self-loops or multiple edges between two nodes, such occurrences are negligible in the limit of large $n$.
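For illustration only (not the paper's code), a minimal simulation of SIR spread on a configuration-model network; the discrete-generation loop and the per-edge transmission probability β/(β+γ) are simplifying assumptions:

```python
import random
import networkx as nx

def sir_on_configuration_model(degrees, beta=1.5, gamma=1.0, m=1, seed=0):
    rng = random.Random(seed)
    G = nx.Graph(nx.configuration_model(degrees, seed=seed))  # collapse multi-edges
    G.remove_edges_from(nx.selfloop_edges(G))                 # drop self-loops
    status = {v: "S" for v in G}
    infected = set(rng.sample(list(G.nodes), m))
    for v in infected:
        status[v] = "I"
    p_transmit = beta / (beta + gamma)  # chance of transmitting along an edge before recovery
    while infected:
        new_infected = set()
        for v in infected:
            for u in G.neighbors(v):
                if status[u] == "S" and rng.random() < p_transmit:
                    status[u] = "I"
                    new_infected.add(u)
            status[v] = "R"
        infected = new_infected
    return sum(1 for s in status.values() if s == "R")  # final epidemic size

print(sir_on_configuration_model([3] * 1000))
```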
|
D
|
We observe how different switching statistics can result in qualitative differences in simulated locomotory paths. Our model also shows how elevated input rates from even single classes of neurons can significantly alter switching statistics and therefore C. elegans behavior.
|
Figure 1: (a) Partial connectome from [47] containing 198 sensory neurons, interneurons, and motor neurons (most motor neurons that form neuromuscular junctions with muscle cells are excluded). The 15 neurons that will be selected as core neurons are shown in purple. Additionally, the partial connectome contains 137 neurons that are directly connected to the core neurons either presynaptically or through gap junctions, and 46 neurons that are connected through an intermediary neuron. (b) Core neurons selected for the premotor network model and the connections among them. We categorize most core neurons as either forward or reversal based on the connectome and previous experimental work. Core neurons receive input from signal neurons that are presynaptic to the core neuron set or connected via gap junctions. Highly connected signal neurons are shown in Figure 2.
|
One of the challenges in building such a model is the extraction of a suitable subnetwork. C. elegans neurons are highly recurrently connected; they do not process information in a feed-forward manner. Many premotor neurons are known to be “hubs” in the network with extensive connections [44]. These network features obfuscate which subset of neurons is responsible for determining locomotion – yet analyzability
|
A subset of interneurons — premotor neurons — are chiefly responsible for determining the most common locomotory behaviors [38, 17, 50].
|
We determine $\beta$, $\mathbf{A}$, and $\mathbf{d}$ simultaneously by performing multiple linear regressions to approximate $\mathbf{A}$ and $\mathbf{d}$ for different values of $\beta$ across 22 datasets selected from Ref. [1]. Our criterion for selecting these 22 datasets is that the dataset must contain at least one of the following forward core neurons — AVBL, AVBR, RIBL, or RIBR — because they are sparsely labeled in the datasets, and one neuron in each class of the reversal neurons and AVD. To reduce the error from missing neurons, we perform data replacement on missing time series when possible by substituting missing time series with the time series of highly correlated neurons. Certain pairs of left/right neurons are consistently highly correlated, and therefore one neuron's time series can be used as a proxy for the other if one neuron in the pair is missing. We label neuron pairs as highly correlated if they are on average at least 70% correlated in the datasets where they both appear; 30 neurons fit this criterion. For these highly correlated neurons, when one of the neurons is missing we use the time series of the other neuron as a proxy in the regression.
|
B
|
Aside from instantaneous energetics, the gait transition from walking to running has been attributed to muscle force-velocity behavior [31], interlimb coordination variability [32], mechanical load or stress [33, 18], and cognitive or perceptual factors [34, 35]; see [36] for a review. However, none of these factors can explain why there may be hystereses between the walk-to-run and the run-to-walk speeds on treadmills [37]. More importantly, it is unclear how any of these ‘instantaneous’ theories can be reconciled with the gradual walk-run-mixture-based gait transition regime we have demonstrated in this manuscript. Predicting a walk-run mixture over a full bout as being optimal must necessarily require a theory that integrates some performance measure over the entire bout.
|
Subjects used a mixture of walking and running in 90% of the trials at the two intermediate speeds (2.22 and 2.6 m/s). On average, walking dominates the walk-run mixture at the lower speeds and running dominates at the higher speeds (Figure 2a), so that the walk-run mixture gradually changes as speed is increased. The time fraction of walking decreases and the time fraction of running increases with average speed. Thus, for this overground gait transition task, there is no sharp gait transition speed but only a “gait transition regime”, which has substantial overlap with the speed range 2 m/s to 3 m/s. Despite the substantially greater distances involved (800 m and 2400 m), the overall trends in the walk-run fraction are similar to those observed for trials over a much smaller distance (120 m) in the earlier study [1].
|
Pure energy optimality in the absence of fatigue or any uncertainty predicts that in the regime where a walk-run mixture is optimal, there should be exactly one switch between walking and running; multiple switches between walking and running are not optimal. Here, we find that the median number of switches is one for three out of the four speeds we considered, with speed 2.62 m/s having a median of two switches, with some variability around this median (Figure 2b). If we assume the cost of a single switch to be approximately $m(V_{\mathrm{run}}^2 - V_{\mathrm{walk}}^2)/2$ [27, 28], we estimate about 3.4 J per unit body mass when the walking speed is 1.5 m/s and the running speed is 3 m/s. This gait and speed switch cost of 3.4 J/kg is negligible compared to about 2300 J/kg for walking 800 m at 1.5 m/s, using steady walking costs from [29, 30]. Despite this cost per switch being negligible, humans only switch gaits a small number of times, still keeping the total switching cost negligible.
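A quick arithmetic check of the numbers quoted above (the mass cancels because the costs are per unit body mass):

```python
v_walk, v_run = 1.5, 3.0                       # m/s
switch_cost = (v_run**2 - v_walk**2) / 2        # J per kg of body mass, ~3.4
steady_walk_cost = 2300.0                       # J per kg for walking 800 m, from the text
print(f"per-switch cost: {switch_cost:.2f} J/kg "
      f"({100 * switch_cost / steady_walk_cost:.2f}% of the 800 m walking cost)")
```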
|
Humans and many other terrestrial animals exhibit a number of different gaits [1, 2, 3, 4, 5]. Humans walk, run, and much more occasionally, skip [6, 7]. Horses walk, trot, canter, and gallop, and more occasionally, use other gaits [3, 8, 9]. Such gait transitions have most commonly been studied using treadmills (Figure 1). In these treadmill gait transition experiments, the treadmill speed is changed slowly, either in a continuous fashion with some fixed acceleration, or in a series of acceleration phases alternating with constant speed phases [10, 11, 12, 13, 14, 15, 16, 17, 18]. These experiments found that people switch between walking and running around 2 m/s, but sometimes, the walk to run transition speed was different from — and higher than — the run to walk transition speed [18, 11, 12] and the transition speeds were different from that predicted by energy optimality [10]. In such treadmill experiments, the gait transition is ‘sharp’, that is, happens at a particular speed where there is a preference of running over walking, or vice versa.
|
We have considered one kind of overground gait transition, in which the task is traveling a given distance at a desired average sub-maximal speed. Another kind of ecological overground gait transition that might have been common in our evolutionary past is to be walking normally and then having to accelerate to a higher running speed to either chase prey or evade a predator. Piers et al [38] considered such a gait-transition task, but this is a substantially different task from that considered here from an energy optimality perspective, as it will be dominated by the cost of changing speeds and the cost of switching gaits [27, 28] – so we do not a priori expect identical gait transition speeds.
|
D
|
We found that decreasing the force is more costly than increasing the force, which we capture by having different coefficients in the model for positive and negative force rate (3). One reason positive and negative force rates may have different costs is that, to decrease force, calcium needs to be pumped back into the sarcoplasmic reticulum, which incurs a metabolic cost [42, 43]. This calcium pumping cost is in addition to the ATP activity that sustains the repetitive actomyosin activity required for force maintenance. At the individual muscle level, metabolic measurements have been performed for continuous or intermittent electrical stimulation in vivo or in vitro. These studies suggest that the cost of intermittent activation is higher than that of continuous activation [44, 45, 46, 47], which is analogous to saying that the cost of producing a sinusoidal force is higher than that of a constant force. However, these studies did not perform experiments comprising different activation and relaxation times, which would be analogous to having different upward and downward sinusoid slopes in our experiments.
|
In the main manuscript, we expressed the metabolic cost as a function of external force and force rate, using a single-link model. Now, we consider a limb with multiple joints and multiple muscles. As in our experiment, this limb at rest needs to produce a one-parameter family of external forces and force rates, all along the same direction but of different force and force-rate magnitudes. We now show that if all the muscles have a power-law metabolic cost, all with the same force exponent and the same force-rate exponent, the energy-optimal muscle forces for the task follow a ‘linear scaling strategy.’ That is, if the energy-optimal solution is known for one external force and force-rate magnitude, the optimal solution for any other external force and force-rate magnitude is obtained by linearly scaling all the muscle forces by one scalar factor and the force-rate magnitudes by a different scalar factor.
|
Here, we focus on developing a metabolic cost model applicable to isometric tasks involving arbitrary time-varying force production, based on joint torque and torque rate, which includes constant force as a special case. In previous work, we showed that the metabolic cost of near-constant isometric force scales non-linearly with force [8]. Van der Zee and Kuo [28] showed that force rates have a substantial energy cost by having subjects produce forces at different frequencies. But these two studies [8, 28] did not independently change force and force rates, so they either do not have information on the cost of force rates or cannot distinguish the effect of a nonlinear metabolic cost dependence on force versus force rate. More generally, previous in-vivo experiments usually involved univariate sweeps along some exertion parameters [8, 29, 30, 28]. Here, we performed extensive human subject experiments with diverse force levels and force changes in a manner that allows us to characterize the independent contributions of force and force rate to the metabolic cost of time-varying forces. We show that a simple additive model with a nonlinear power-law cost for force and force rates is sufficient to explain the metabolic cost of force production. Further, we examine forces with different increasing and decreasing force rates, allowing us to show that the cost of decreasing forces is higher than the cost of increasing forces. Finally, while our metabolic cost model is at the level of human joints, we provide mathematical arguments for how this joint-level model may extend to the individual muscle level.
|
An alternative explanation for the different costs of increasing and decreasing force is the use of co-contraction to reduce the output force quickly by activating the antagonist muscles to achieve the required negative force rates. Co-contraction or pre-activation of muscles is seen in a variety of ecological tasks, so an increased cost of negative force rates may be behaviourally relevant.
|
D
|
$P_\alpha = \dfrac{r\,\gamma(\alpha)}{\gamma(\alpha)(r+\lambda) + r\lambda}.$
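A direct transcription of this expression as a function (here gamma is assumed to be a callable giving $\gamma(\alpha)$; the example values are arbitrary):

```python
def infection_probability(alpha, r, lam, gamma):
    # Probability of infection for a pathogen with parameter alpha, per the expression above.
    g = gamma(alpha)
    return (r * g) / (g * (r + lam) + r * lam)

print(infection_probability(alpha=1.0, r=0.5, lam=0.2, gamma=lambda a: 0.3 * a))
```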
|
We also studied the evolution of the mutated pathogen during the time it spreads, as presented in Fig. 4, which shows how pathogens with greater values of $\gamma$ appear during the epidemic spread. The figure presents the average and the greatest value of $\gamma$ among the infections in a specific simulated week. The pathogens' evolution can be seen in the bottom panel, where for each week of the simulations the greatest-$\gamma$ pathogen that appeared during that week is presented. Then, after some time, the pathogens with the greater values infect more new nodes, and the average $\gamma$ value increases accordingly (top panel). At the same time, the high-$\gamma$ pathogens (as well as the other pathogens) stop infecting nodes, as a result of the network structure and its finite size, which limits the number of infection steps and thus also limits the pathogen's mutation.
|
Another advantage of the square grid graph is the ability to easily visualize the graph's structure and better understand the flow of the epidemic in it. Figure 8 displays the mutation of the pathogen during the epidemic spreading in the network. The $\alpha$ value of each node is presented as a color in the figure. Dark pixels represent never-infected susceptible nodes. The initial infected node is colored in red. It can be seen that close to the initial node, where pathogens are likely to have smaller values of $\gamma$, more nodes remain susceptible, i.e., the infection probability is lower. As the distance from the initial node increases, these holes become less and less common. That is, after the pathogen mutates, the infection probability becomes larger (due to the decreased mortality, leading to higher transmissibility values), and thus distant nodes have a much lower probability of staying healthy without getting infected. At the same time, farther from the initial node we can see much higher variance in the parameter value. This can be explained by what biologists call the “founder effect”: in each area the infection can be envisioned as a tree with its founder as a root. The parameter value of this “founder” node is likely to have a large effect on the properties of pathogens existing in the area around it. Thus, one can see in the figure patches of brighter and darker colors, which are usually brighter than the vicinity of the original infection but have a large variation due to the exact parameter values of the pathogens that arrived from that direction.
|
The solution of the integral yields the probability of infection for each pathogen. Eq. (8) agrees with intuition, in that pathogens with a longer mean lifetime are more likely to infect their susceptible neighbors. Correspondingly, as time goes on we expect both the number of mutated pathogens and the values of $\gamma(\alpha)$ of each pathogen to increase.
|
Figure 2 demonstrates how higher values of $\alpha$ (and hence of $\gamma$, since $\gamma$ depends on $\alpha$; see equation 4) impact the spread of pathogens in the network: pathogens with higher values of $\alpha$ (or $\gamma$) gain more and more dominance over the network as they infect a greater fraction of the newly infected offspring. These results are in good agreement with Eq. (9), where pathogens with a greater $\gamma$ (and, respectively, a greater $\alpha$) are more likely to infect new nodes. It should be noted that the right tail of each scatter diagram, in which the proportion of pathogens with large $\alpha$ seemingly decreases, is due to the finite time of the simulations, which does not allow these pathogens, appearing in later stages of the simulations, to infect a significant portion of susceptible nodes. In any case, the principle of proportional growth with increasing $\alpha$ is conserved, and as time goes on we are more likely to see pathogens with higher values of $\gamma(\alpha)$.
|
C
|
Line (g) is a plot of the logistic equation whose asymptote, $N^*$,
|
$N^*$, is the same as a population brought to steady-state by disease
|
the same broad shape as the 5-locus haplotype distribution of Fig. 3, but the linear portion of the distribution is limited to the first
|
but the values of $N^*$ emerge as relative abundances or
|
state, $r = \alpha N^*$, where $N^*$ is the steady-state density
|
A
|
We study the evolution of recognition sites in the low-mutation regime, where site sequences are updated sequentially by substitutions in an evolving population. The rates of these substitutions depend on the corresponding mutation rates and on selection coefficients scaled by an effective population size $N$, as given by Haldane's formula [51] (Appendix A). Our minimal model contains three types of updating steps. First, point mutations of the site sequence occur at a homogeneous rate $\mu$ per unit of length (Fig. 2A). Hence, a recognition site with $k$ matches and $\ell - k$ mismatches has a total rate $\mu_+(\gamma,\ell) = \mu\ell\gamma\gamma_0/(1-\gamma_0)$ of beneficial mutations and a total rate $\mu_-(\gamma,\ell) = \mu\ell(1-\gamma)$ of deleterious mutations (Appendix A). These changes have selection coefficients $\pm s_\gamma(\gamma,\ell)$ and substitution rates $u_\pm(\gamma,\ell)$, which depend on the fitness landscape $F(\gamma,\ell)$ introduced above.
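A sketch transcribing the mutation-rate expressions above and combining them with a standard diffusion-approximation fixation probability; the specific fixation formula (Kimura's) and the example parameter values are our assumptions, since the text only cites Haldane's formula:

```python
import math

def mu_plus(mu, gamma, ell, gamma0):
    # Total rate of beneficial point mutations, as given above.
    return mu * ell * gamma * gamma0 / (1.0 - gamma0)

def mu_minus(mu, gamma, ell):
    # Total rate of deleterious point mutations, as given above.
    return mu * ell * (1.0 - gamma)

def fixation_probability(s, N):
    # Diffusion approximation for a new mutant with selection coefficient s in a population of size N.
    if abs(s) < 1e-12:
        return 1.0 / N
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * N * s))

def substitution_rate(mutation_rate, s, N):
    # Mutants arising per generation times their probability of fixation.
    return N * mutation_rate * fixation_probability(s, N)

print(substitution_rate(mu_plus(1e-8, 0.2, 10, 0.25), s=0.01, N=1000))
```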
|
Third, the recognition target sequence changes at a rate $\rho = \kappa\mu$ per unit of length (Fig. 2D). This rate is assumed to be comparable to the point mutation rate $\mu$ and to be generated by external factors independent of the recognition function, similar to compression and extension mutations. The selective effects of target changes take the form of a fitness seascape, generating an explicitly time-dependent fitness of a given recognition site sequence [52]. On the recognition-fitness map $F_r(\gamma,\ell)$ and in the low-mutation regime, target changes act as additional effective mutations that change the free energy $\Delta G$ and the recognition $R$ with respect to the moving target. Examples for recognition target sequence mutations are mutations in DNA binding domains of transcription factors. Experiments have shown that the effects of mutations in the DNA binding domain of LacI in E. coli can be reduced to a change in binding energy, similar to mutations in the DNA itself [53]. Importantly, the selective effects of target mutations are again given by the fitness landscape $F(\gamma,\ell)$ but their rates are not: beneficial and deleterious changes occur at neutral rates $\rho_+(\gamma,\ell) = \rho\ell\gamma\gamma_0/(1-\gamma_0)$ and $\rho_-(\gamma,\ell) = \rho\ell(1-\gamma)$, respectively, generating a net degradation of recognition and driving the adaptive evolution of recognition sites.
|
Second, the recognition target sequence can change by extension and compression steps, which include one unit of sequence into the recognition site or exclude one unit from recognition (Fig. 2B,C). These changes, which affect the architecture of recognition, are assumed to occur at a much lower rate than point mutations, $\nu \ll \mu$, and to be generated by external factors independent of the recognition function. Both of these features will emerge as essential for the evolutionary dynamics of complexity emerging in this system.
|
Clearly, the minimal evolutionary model is a broad approximation to the evolutionary dynamics of any specific receptor-target interface. The model neglects many details of actual molecular evolution processes that are not important for the conclusions of this paper. Three model features turn out to be crucial for what follows. First, the sequence mutation dynamics does not introduce any bias towards higher complexity. Second, there is a separation of mutational time scales: site complexity changes take place at a much lower rate than recognition site and target mutations ($\nu \ll \mu$). Third, because selection on the recognition function depends only on the binding affinity phenotype $\Delta G$, the model does not introduce any explicit fitness benefit of sequence complexity. Nevertheless, complexity can emerge as a collateral of selection for function in a non-equilibrium dynamical pathway – an evolutionary ratchet. We will first characterize the complexity of stationary states in different parameter regimes of the evolutionary model, then derive specific dynamical pathways towards these states.
|
Organisms live in dynamic environments. Changing external signals continuously degrade the fidelity of an organism’s recognition units and generate selective pressure for change. Here we argue that stochastic, adaptive evolution of molecular interactions drives the complexity of the underlying sequence codes. By analytical computation and in silico evolution experiments, we show that recognition sites tuned to a dynamic molecular target evolve a larger code length than the minimum length, or algorithmic complexity, required for function (Fig. 3). The increase of coding length is coupled to a decrease of coding density; that is, individual letters carry a reduced information about the target. Importantly, these shifts also increase the number of mutational paths available for adaptation of recognition and the overall turnover rate of the recognition site sequence. Complex sites evolve in a specific regime of adaptive tinkering, which facilitates the maintenance of recognition interactions in dynamic environments. Moreover, the rapid parsing of fuzzy recognition site sequences increases the propensity for functional refinement or alteration, for example by tuning epistatic interactions within the recognition site or by recruiting additional targets for cooperative binding. We can summarize these dynamics as an evolutionary feedback loop: moving targets, by increasing the molecular complexity of their cognate recognition machinery, improve their own recognition.
|
B
|
For this purpose, we used ATL samples and 77 ATAC-seq datasets from 13 human primary blood cell types.
|
We next examined ATL cases by inferring the past cell status before infection with HTLV-1 and the current cell status in terms of immunophenotypes compared with normal hematopoietic cells.
|
nearly equally to intronic and intergenic regions (Thurman2012). Consistent with these data, as shown in Fig. 1, about 10% of the ATAC-seq peaks overlap with the TSSs and their surrounding regions, whereas the majority of ATAC-seq peaks (about 85%) of healthy CD4+ T cells, ATL cells, and HAM cells reside in intergenic or intronic regions.
|
To ascertain whether the mRNA expression in the ATL cells reflects the characteristics of myeloid cells, we analyzed the RNA-seq data from healthy CD4+ T cells and HTLV-1-infected CD4+ cells from 4 ATL cases (samples 8, 10, 21, 24 in Table 1).
|
As summarized in Table 1, the majority of ATL samples are close to CD4+ T cells, as expected from the above analysis of the past cell status.
|
D
|
For syncopation tasks, Relative Average Betweenness Centrality was significantly lower in the Mutual condition compared to both Uncoupled ($\Delta\mu_{U,M} = 0.187$, p = 0.0085) and Leader-Follower ($\Delta\mu_{LF,M} = 0.109$, p = 0.042), indicating less centralized probability flow. Additionally, significant differences in Shortest Path Length were found between Uncoupled and Leader-Follower ($\Delta\mu_{U,LF} = -0.077$, p = 0.017) and between Uncoupled and Mutual ($\Delta\mu_{U,M} = -0.074$, p = 0.016), suggesting a lower spread of states in the presence of either interaction condition. However, unlike the synchronization sessions, this reduction in the spread of the flow reduces the importance of the core states, as indicated by the lower Betweenness Centrality.
|
The distance is calculated for each pair of correlation matrices in a given session, resulting in a distance matrix for each subject across an entire experimental session, as shown in Fig. 1. Multidimensional scaling (Scikit-learn sklearn.manifold.MDS class; Borg and Groenen (2007)) was used to create a 3-dimensional embedding (dimensionality selected using the Bayesian Information Criterion) of the correlation matrices, as shown in Fig. 1. Discrete brain states were then identified by fitting a Gaussian mixture model (Scikit-learn sklearn.mixture.GaussianMixture class; Reynolds et al. (2009)), selecting the number of clusters that minimizes the Bayesian Information Criterion. Each of those coarse-grained states was labeled with a symbol. The term symbolic sequence refers to the time sequence of brain states, and the term symbol is used to refer to the brain states in those sequences. A schematic representation of the sequence derived for a single experiment is shown in Fig. 1.
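A sketch of this pipeline on toy data (the synthetic points below only illustrate the API; in the actual analysis the distances come from pairs of correlation matrices):

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Three synthetic clusters standing in for recurring brain states.
points = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 3)) for c in (0.0, 2.0, 4.0)])
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

# 3-dimensional MDS embedding of the precomputed distance matrix.
embedding = MDS(n_components=3, dissimilarity="precomputed", random_state=0).fit_transform(dist)

# Fit GMMs with 1..6 components and keep the number of states minimizing the BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(embedding).bic(embedding)
        for k in range(1, 7)}
n_states = min(bics, key=bics.get)
labels = GaussianMixture(n_components=n_states, random_state=0).fit_predict(embedding)
print(n_states, np.bincount(labels))
```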
|
Brain activity measured at a macroscopic (cm) scale in humans using electroencephalography (EEG) reflects transient quasi-stable patterns that evolve over time Nunez and Srinivasan (2006). An extensive literature characterizes these patterns as functional networks, using correlation, coherence, or mutual information to identify structure in patterns of connectivity between brain regions. One approach to EEG analysis is to identify stable patterns in short windows of time, called microstates, and to characterize the EEG signals as a sequence of microstates Michel and Koenig (2018). By labeling each microstate with a discrete symbol, various approaches based on information theory have been applied to the symbolic dynamics to characterize the evolution of brain states Hutt and Beim Graben (2017).
|
To validate the method presented in Fig. 1, we tested it using a model with three metastable states to demonstrate its ability to accurately capture transitions between different basins of attraction. In particular, we aimed to show how the method can detect and characterize the topological features of the system dynamics. Specifically, we selected a well-studied model of three-state stochastic oscillators Wood et al. (2006), where each unit (illustrated in Fig. 2A) transitions between three states based on probabilistic rules influenced by the number of neighboring units in each state. Each oscillator can be in one of the three states labeled 1, 2, and 3. At a given time point the oscillator can change its state from state $i$ to $j$ according to the transition rate $g_{ij}$ defined as:
|
Our approach to defining symbols departs from other approaches Beim Graben et al. (2016) by employing Gaussian Mixture Models (GMM) to identify the symbols by focusing on regions with a high density of points. A particular characteristic of GMM is its capability of modeling these dense regions as a mixture of Gaussian distributions, capturing the underlying probability density effectively. This probabilistic approach allows GMM to handle clusters with varying shapes and sizes and provides a likelihood to evaluate the model using Bayesian Information Criterion (BIC). Therefore, GMM provides an appropriate method for identifying these high-density regions which can be associated with basins of attraction.
|
D
|
However, protein-RNA binding is highly flexible. Some proteins bind RNA through canonical regions, while others bind RNA through intrinsically disordered regions, protein domains characterized by low sequence complexity and highly variable structures (Seufert et al. 2022), which makes the mechanism challenging to model.
|
We first evaluate our model’s performance on PRA310 and PRA201. We divide the baseline methods into sequence- and structure-based. As illustrated in Table 1, the scratch version of CoPRA reaches the best performance on the PRA310 dataset. IPA is the best-performing model without LMs, and replacing the sequential input of IPA with the embeddings from LMs improves its performance by 0.19 in PCC. Moreover, most methods with LM embeddings as input perform better than others, indicating the great power of combining pre-trained unimodal LMs for affinity prediction. We then pre-train our model with PRI30k, which increases the overall performance significantly on both datasets. On PRA310, CoPRA reaches an RMSE of 1.391, MAE of 1.129, PCC of 0.580, and SCC of 0.589, much better than the second-best model, CoPRA (scratch). PredPRBA and DeePNAP only provide web servers and support protein-RNA pair affinity prediction, so we compared against them on the PRA201 dataset. Although at least 100 samples in PRA201 appear in their training sets, their performance on PRA201 is significantly lower than reported, indicating the limited generalization ability of these methods. This phenomenon can be explained by the experiment of PRdeltaGPred (Hong et al. 2023) that removes the worst-performing samples, as shown in Appendix E. Moreover, we observe a consistent performance improvement of most models from PRA310 to PRA201, indicating that PRA310 is more comprehensive and challenging. The experiments on PRA310 and PRA201 show CoPRA’s ability to precisely predict the binding affinity, especially when equipped with the proposed bi-scope pre-training.
|
Several sequence- or structure-based machine learning methods have been applied to predict protein-RNA binding affinity. For example, PNAB (Yang and Deng 2019b) is a stacking heterogeneous ensemble framework based on multiple machine learning methods, e.g. SVR and Random Forest. They manually extract different biochemical features from the protein and RNA sequences. DeePNAP (Pandey et al. 2024) is another sequence-based method, leveraging 1D convolution networks for feature extraction. PredPRBA and PRdeltaGPred (Deng et al. 2019; Hong et al. 2023) employ interface structure features for better prediction. Besides, PRA-Pred (Harini, Sekijima, and Gromiha 2024b) is a multiple linear regression model, which utilizes protein-RNA interaction information as features in addition to the protein and RNA information. These studies demonstrate that the sequence features of RNA/protein and the interface structure features both contribute to more accurate prediction. However, most of them employ only part of this information, and a method that leverages both sequence and interface structure information is still needed.
|
Several computational methods have been proposed for protein-RNA binding affinity prediction, including sequence-based and structure-based methods. The sequence-based approaches process the protein and RNA sequence separately with different sequence encoders (Yang and Deng 2019a; Pandey et al. 2024), and subsequently model the interactions. However, their performance is often limited because the binding affinity is mainly determined by the binding interface structure (Deng et al. 2019). Other recent methods are structure-based (Hong et al. 2023; Harini, Sekijima, and Gromiha 2024a), focusing on extracting structural features at the binding interface, such as energy and contact distance. Based on the extracted features, they developed structure-based machine-learning approaches for affinity prediction. However, these methods are highly dependent on feature engineering with limited generalization ability on new samples due to the limited development dataset size.
|
Learning from multiple modalities can provide the model with multi-source information about the given context (Huang et al. 2021). Multi-modal learning achieves impressive performance improvements compared to its single-modal counterparts and enables new applications (Luo et al. 2024; Li et al. 2023). Contrastive learning is one efficient unsupervised way to align multi-modal representations in the same semantic space. CLIP (Radford et al. 2021) used an in-batch contrastive strategy to train visual encoders with text encoders. BLIP-2 (Li et al. 2023) introduces a lightweight QFormer for visual-language pretraining with frozen image encoders and LLMs. In the field of proteins, many efforts have been made to integrate 3D structure information into PLMs. LM-design (Zheng et al. 2023) adds a structure adapter to ESM-2, enabling structure-informed PLMs for conditional protein design. Recently, SaProt (Su et al. 2024) and ESM-3 (Hayes et al. 2024) pretrain the PLM with protein sequence and structural information, increasing the models’ overall performance. Existing multi-modal PLMs were trained with the protein structure and sequence modalities. Combining multiple biological modalities (e.g. protein and RNA) with complex structure information for complex-level interaction tasks remains an open problem.
|
C
|
The FocusPath dataset [13] consists of 864 image patches, each with a resolution of 1024×1024 pixels in sRGB format, capturing varying degrees of focus. These patches are cropped from nine distinct whole slide images (WSIs), with 16 different z-levels employed to simulate various out-of-focus conditions. The tissue slides selected for the dataset feature diverse color staining techniques, ensuring that the dataset reflects a wide range of histopathological staining variations. The dataset involves image patches with varying degrees of focus, translating to different levels of blur. This dataset serves as an excellent tool for assessing our metric’s capability to distinguish between blurry and clear images. Moreover, it allows us to evaluate the monotonicity of our metric with respect to the degree of blur, as it should assign higher values to images with greater blurriness. We train our network only on images with z-level 0 (which are blur-free) and evaluate it on all the other images with higher degrees of blur. We add salt-and-pepper noise and rectangular patch noise to the z-level 0 images for our experiments involving other kinds of noise. We used images at 100× resolution from the BreakHis [14] dataset to generate synthetic images for our experiments testing RL2 for diffusion noise.
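A minimal sketch of the two synthetic corruptions applied to the in-focus (z-level 0) patches is given below; the noise amount, patch size, and function names are illustrative assumptions, not the study's exact settings.

```python
# Hypothetical sketch of salt-and-pepper and rectangular-patch corruption
# of an 8-bit image array; parameters are assumptions.
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape[:2])
    noisy[mask < amount / 2] = 0          # pepper pixels
    noisy[mask > 1 - amount / 2] = 255    # salt pixels
    return noisy

def rectangular_patch(img, patch_size=64, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = img.copy()
    h, w = img.shape[:2]
    y = rng.integers(0, h - patch_size)
    x = rng.integers(0, w - patch_size)
    noisy[y:y + patch_size, x:x + patch_size] = 0  # occluding rectangle
    return noisy
```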
|
The FocusPath dataset [13] consists of 864 image patches, each with a resolution of 1024×1024 pixels in sRGB format, capturing varying degrees of focus. These patches are cropped from nine distinct whole slide images (WSIs), with 16 different z-levels employed to simulate various out-of-focus conditions. The tissue slides selected for the dataset feature diverse color staining techniques, ensuring that the dataset reflects a wide range of histopathological staining variations. The dataset involves image patches with varying degrees of focus, translating to different levels of blur. This dataset serves as an excellent tool for assessing our metric’s capability to distinguish between blurry and clear images. Moreover, it allows us to evaluate the monotonicity of our metric with respect to the degree of blur, as it should assign higher values to images with greater blurriness. We train our network only on images with z-level 0 (which are blur-free) and evaluate it on all the other images with higher degrees of blur. We add salt-and-pepper noise and rectangular patch noise to the z-level 0 images for our experiments involving other kinds of noise. We used images at 100× resolution from the BreakHis [14] dataset to generate synthetic images for our experiments testing RL2 for diffusion noise.
|
For the experiment to filter out low-quality noisy image patches from good-quality ones, we used the HistoROI dataset [11]. The HistoROI dataset was developed to segment WSIs into six key classes: epithelium, stroma, lymphocytes, adipose, artifacts, and miscellaneous. Artifacts in this dataset include out-of-focus areas, tissue folds, cover slips, air bubbles, pen marks, and areas with extreme over- or under-staining, which were carefully labeled to aid in quality control for pathology image analysis. We trained our network using 1500 artifact-free patches from the epithelium and stroma classes. We then use this network to distinguish 474 artifact patches from 474 clean patches.
|
Additionally, a quality metric trained on real patches of high quality can also be used to filter out low-quality patches from a whole slide image while training and testing deep learning pipelines for weakly supervised learning in histopathology [11].
|
In the experiment for filtering out low-quality patches with artifacts from high-quality patches, our network trained only on clean patches gives an AUC score of 0.76. Even without being trained on the artifacts, the model is able to distinguish clean patches from noisy ones based on their likelihood.
|
B
|
Our study highlights the potential benefits of improved connectivity between cities, particularly regarding public health outcomes. Increased connectivity is likely associated with a combination of socioeconomic advantages, better access to healthcare, and more effective public health interventions. These benefits are especially evident in smaller cities and those with moderate levels of connectivity, where enhanced commuting networks may mitigate disease incidence. However, as cities expand and connectivity intensifies, the challenges of increased disease transmission may outweigh the benefits, particularly for diseases that spread more easily through human contact.
|
Here we bridge this gap by investigating the effect of inter-city interactions on the association between population size and the number of cases for seven infectious diseases across Brazilian cities. To do so, we use the commuting network among cities as a proxy for inter-city interactions, combined with a general scaling framework based on the economic theory of production functions [29], which has proven useful in studies of urban carbon dioxide emissions [30] and urban wealth [27]. This approach allows us to describe the number of disease cases as a function of both population size and the strength of inter-city interactions, modeled by the total number of commuters (the weighted total degree of a city in the commuting network). We show these models significantly outperform the traditional urban scaling model across the seven disease types, particularly by reducing bias in large urban areas. Additionally, we assess the impact of proportional changes in population and total number of commuters on disease cases by calculating an elasticity of scale derived from our models for individual cities. This elasticity depends on the product of population and number of commuters and predicts the existence of distinct scaling regimes, depending on whether this product exceeds specific thresholds. Overall, the majority of cities display decreasing returns to scale in relation to changes in population and the number of commuters, with the proportion of cities showing this trend ranging from 95% to 66%. This implied that a 1% increase in both quantities is associated with less than a 1% increase in disease cases for most Brazilian cities. However, increasing returns to scale are also observed for all disease types, with percentages ranging from 0.4% to nearly a quarter of Brazilian cities. In these cities, a 1% increase in population and commuters correlates with more than a 1% increase in disease cases. Interestingly, we also identify a few small cities that exhibit negative elasticity of scale for certain disease types, indicating that a small proportional increase in population and commuters is associated with a decrease in the number of disease cases for these cities. We also investigate the individual effects of population and commuters on disease cases, finding that most cities exhibit a less-than-proportional response in disease cases to changes in either population or commuter numbers; however, an increase in commuters is associated with a decrease in syphilis and pertussis cases in most cities, as well as in a significant number of cities for tuberculosis and viral hepatitis. Finally, we compare the relative impact of both variables, revealing that changes in population generally affect disease cases more than proportional changes in the total number of commuters.
|
Our study is not without limitations. While our findings provide evidence supporting the suitability of the Cobb-Douglas and translog models in better describing the data, these models lack mechanistic explanations that directly link the holistic concept of interacting cities to their specific functional forms or to the broader theoretical framework of production functions from which they are derived. Additionally, despite the application of a regularization approach to estimate model parameters, the strong correlations between population size and the number of commuters may constrain the ability to disentangle their individual effects on disease cases. To address these limitations, future research could explore mechanistic foundations, explicitly incorporate correlations among predictors into model assumptions, and account for zero disease counts in cities. Recent advances in urban scaling — such as generative processes based on the distribution of tokens among individuals [50, 51, 25, 52] — offer promising avenues for addressing these challenges.
|
We fit the urban scaling (Eq. 1), Cobb-Douglas (Eq. 2), and translog (Eq. 3) models to each of the seven disease types. For the urban scaling model, we estimate the value of $\beta_N$ using the standard least-squares method applied to the relationship between $\log Y$ and $\log N$ (see Supplementary Figure S1 for visualizations of the adjusted scaling laws). In contrast, estimating the parameters of the Cobb-Douglas and translog models using ordinary least-squares is not recommended due to the strong correlations between $\log N$ and $\log S$, as well as between these terms and their product in the case of the translog model. This effect, known as multicollinearity [39], occurs when two or more predictor variables in a regression model are highly correlated, leading to unstable parameter estimates and inflated standard errors. This instability arises because the ordinary least-squares method relies on the inversion of the Gram matrix $G$, defined as the product of the transpose of the regressor matrix and the regressor matrix. In the presence of strong correlations among predictors, this matrix becomes nearly singular, making its inversion highly sensitive to small perturbations in the data. To address this issue, we use the ridge regression approach [40] to estimate the parameters of the Cobb-Douglas and translog models. Ridge regression mitigates the effects of multicollinearity by adding a constant $\lambda$ to the diagonal elements of $G$, stabilizing the inversion of $G$ and reducing the sensitivity of the regression coefficients to multicollinearity. This modification is equivalent to identifying the best-fitting parameters by minimizing the residual sum of squares with an added regularization term proportional to the sum of squares of the model parameters, where $\lambda$ is the proportionality constant. Thus, in addition to the model parameters, the hyperparameter $\lambda$ needs to be estimated from the data. Following standard practice, we estimate this hyperparameter by minimizing the mean squared error using a leave-one-out cross-validation strategy. Furthermore, to ensure uniform regularization across all independent variables, we standardized the values of $\log N$ and $\log S$ before determining the optimal hyperparameter value (see Refs. [30, 27, 41] for more details). For the numerical implementation of this approach, we rely on the Python module scikit-learn [42].
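A rough scikit-learn sketch of this fitting procedure is shown below (not the authors' code); the grid of candidate $\lambda$ values, the function name, and the choice to standardize the interaction term together with the other regressors are assumptions.

```python
# Sketch of the ridge-regression fit of the translog model with
# leave-one-out cross-validation over the regularization strength.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler

def fit_translog(log_N, log_S, log_Y, lambdas=np.logspace(-4, 2, 100)):
    # Regressors of the translog model: log N, log S, and their product.
    X = np.column_stack([log_N, log_S, log_N * log_S])
    # Standardize predictors so the regularization acts uniformly.
    X = StandardScaler().fit_transform(X)
    # With cv=None, RidgeCV performs efficient leave-one-out
    # cross-validation over the candidate lambda values.
    model = RidgeCV(alphas=lambdas, cv=None).fit(X, log_Y)
    return model.coef_, model.intercept_, model.alpha_
```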
|
Figure 2 compares the performance of the three models in predicting the number of cases of HIV/AIDS, meningitis, and influenza. A simple visual inspection reveals that urban scaling models provide the poorest predictions (Fig. 2A), significantly underestimating the number of disease cases in large cities. The Cobb-Douglas models (Fig. 2B) improve the predictions by slightly reducing this bias. However, it is the translog models (Fig. 2C) that offer the most accurate predictions, markedly reducing the underestimation of disease cases in large cities. This visual assessment is corroborated by the coefficients of determination ($R^2$), shown in the figures, which attain the highest values for the translog models. Similar conclusions are drawn from the Bayesian information criterion (BIC) and Akaike information criterion (AIC) [43], which account for the varying number of parameters among the urban scaling, Cobb-Douglas (one more parameter than the urban scaling), and translog models (one more parameter than the Cobb-Douglas). Specifically, the insets of Figures 2B and 2C compare the BIC and AIC values across the three models, demonstrating that the translog model yields the minimum BIC and AIC values, indicating not only the best predictions but also the most parsimonious fit for our data.
|
B
|
To further assess GAN-TAT’s applicability, we grouped genes by their probability percentiles as assigned by the model and mapped them to Tclin genes to calculate overlaps (Figure 2A). A Fisher’s exact test was performed on these overlaps. Additionally, we compared pathway enrichment scores of the top 5% predicted genes for GO:BP pathways against those of Tclin genes (Figure 2B) [18]. A 15-point moving average is applied to reduce noise [21].
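As a hedged illustration of these two analysis steps (the contingency-table layout, one-sided alternative, and function names are assumptions, not the authors' exact implementation):

```python
# Sketch: Fisher's exact test on percentile-group vs. Tclin overlaps,
# and a 15-point moving average for smoothing enrichment scores.
import numpy as np
from scipy.stats import fisher_exact

def overlap_significance(n_hits, n_group, n_tclin, n_total):
    # 2x2 table: (in percentile group / not) x (Tclin / not Tclin).
    table = [[n_hits, n_group - n_hits],
             [n_tclin - n_hits, n_total - n_group - (n_tclin - n_hits)]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

def moving_average(scores, window=15):
    # 15-point moving average used to reduce noise in the score curves.
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="valid")
```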
|
We trained nine models using three different embedding algorithms: Node2Vec, LINE (Large-scale Information Network Embedding) [14, 36], and ImGAGN-GraphSAGE. Each embedding algorithm was paired with three different classifiers: Decision Tree (DT), Random Forest (RF), and XgBoost [33, 3, 6]. All models were trained on an Apple M3 Pro CPU. Hyperparameter tuning for the ImGAGN-GraphSAGE model was done manually. Node2Vec and LINE were paired with the XgBoost classifier for evaluation. Both of these embeddings and all classifiers were tuned using grid search with 5-fold cross-validation and evaluated using the mean AUC-ROC score [27, 15]. Further details on hyperparameters and training are available in the GitHub repository. Results are summarized in Table 1.
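The tuning setup could look roughly like the sketch below; the parameter grid, variable names (X_embed, y), and classifier defaults are illustrative assumptions rather than the reported configuration.

```python
# Sketch of grid search with 5-fold cross-validation scored by mean AUC-ROC
# for an XGBoost classifier on per-gene embedding vectors.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(
    estimator=XGBClassifier(),
    param_grid=param_grid,
    scoring="roc_auc",
    cv=5,
)

# X_embed: embedding vectors per gene (e.g. from Node2Vec or LINE);
# y: binary labels (e.g. Tclin vs. non-Tclin).
# search.fit(X_embed, y)
# print(search.best_params_, search.best_score_)
```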
|
The efficacy of various GAN-TAT configurations and frameworks, based on different embedding algorithms, was evaluated using three distinct label sets sourced from Pharos: Tclin genes, Tclin targets for pancreatic intraductal papillary-mucinous neoplasm, and Tclin targets for acute myeloid leukemia [24]. The PIN and the extended feature set used are the same as described in Sections 2.1 and 2.2.
|
This study aims to address existing constraints in the utilization of the Protein Interaction Network (PIN) for identifying druggable genes. We propose a novel framework, GAN-TAT (Generative Adversarial Network-based Target Assessment Tool), which incorporates a latent representation of the PIN for each gene, serving as a unique feature in a machine-learning model. This representation is generated by the ImGAGN algorithm (Imbalanced Network Embedding via Generative Adversarial Graph Networks), specifically designed to tackle the challenge of imbalance [28]. Comparative analyses show that GAN-TAT outperforms architectures based on traditional embedding models in efficacy metrics. Validations using three distinct label sets from the Pharos database confirm its effectiveness. Further analysis demonstrates that the gene predictions made by GAN-TAT strongly correlate with clinically validated targets, underscoring its potential as a robust methodological approach for future research and practical applications in pharmacogenomics.
|
Our observations indicate that models based on the ImGAGN-GraphSAGE framework consistently outperform those utilizing other embedding algorithms, particularly in label sets characterized by higher imbalance. Among the classifiers evaluated, XgBoost emerged as the most effective across all embedding methods. The data show that the GAN-TAT configuration used in this study (ImGAGN-GraphSAGE + XgBoost) performs best, with high AUC-ROC scores of 0.951, 0.919, and 0.925 across the datasets for Tclin, pancreatic neoplasm, and leukemia, respectively (Table 1).
|
D
|
Rapid Experimental Validation - Data collection and validation are essential parts of the machine learning pipeline. Due to advancements in sequencing technology, databases such as PDB have grown rapidly in size, whilst labels for some applications still remain limited. The ability to rapidly determine protein characteristics through experimentation can greatly enhance the ability to address task specific prediction problems in bioinformatics. Experimentation is particularly important in design problems where in vitro validation is required to guarantee results. The equivalent process in the CV and NLP domain is qualitative feedback from human reviewers for generative results, which is considerably simpler and cheaper to conduct at scale. Advancements in laboratory experimentation and computational bioinformatics are therefore mutually beneficial, and greater improvements can be achieved through rapid progress in both areas.
|
Protein folding is a dynamic process that releases energy and is driven by hydrophilic/hydrophobic forces, van der Waals forces, and conformational entropy [7, 63]. In most cases, the structures stabilize at the minimum free energy, although it is possible that they stabilize at a higher energy level because they are unable to dynamically reach a lower configuration. There are sometimes variations in the folds, and it is possible that some sequences stabilize into different structures depending on their external environment. On the other end of the spectrum, proteins with the same structure may have slightly different sequences due to evolutionary mutation between and within species.
|
Protein design is an important component of tasks such as bioengineering and drug design, but remains highly challenging. Different sequences may fold into the same target structure, making it an ill-posed problem, and experimental validation in vitro is required to ensure generated sequences can maintain stable folds in natural environments, which limits the ability to iterate rapidly. Unlike structural and functional prediction problems that have standardized evaluation metrics, accurate evaluation and comparison between different design methodologies remain difficult. Currently, a large proportion of structures shown to be stable and valid through in silico experiments do not fold when synthesized. Thus, attempts to use in silico methods, latent energy analysis, or metrics such as recomposition scores for comparisons may not be entirely accurate. In order to truly address this issue, a better understanding of the deficiencies in current in silico procedures is required, which in turn implies a better understanding of folding dynamics and fold stability. Similarly, more focus could be placed on the percentage of in silico folds that achieve stable structures when synthesized in vitro.
|
Dynamic Modelling of Proteins - Standardized challenges and benchmarks such as CASP have played an important role in accelerating our knowledge of the folding process. However, it is important to remember that the challenge is restricted to a specific formulation of the problem. The current collection of structural information used in CASP and most experiments is based on crystallography, where the protein is transformed into crystal form through vapor diffusion. This is an important detail because cells are not crystalline, and experimental structures obtained through crystallography do not truly reflect structure in its natural environment. The actual folding process itself is also a dynamic process that is not well understood. To give one example, there are still conflicting views on whether the process is multi-pathway or path-dependent [28], which has implications for where proteins stabilize on the energy landscape. Some folding processes also depend on chaperone proteins that enable specific folds. Even after stabilization, some proteins may still maintain flexible backbones, which makes them especially difficult to capture and model. Studying the full dynamic nature of proteins is highly complex, and there is still room for improvement in this field.
|
Despite this, there are still important problems in structure prediction that remain to be addressed. Essential life functions tend to be carried out through multi-protein complexes, where interactions between multiple structures drive vital processes. Individual protein structures within these complexes remain challenging to model and are outside the scope of CASP. Also, the current accuracy of AlphaFold2 in RMSD terms is 1.6 Å, which is highly impressive but still too large for direct application in drug design, where the confidence of individual atoms needs to be within 0.3 Å. Sub-classes of structure prediction problems, such as loop modelling, which are not as well-defined, also have room for improvement. Thus, there remain open topics for research in the field of structural bioinformatics for proteins.
|
C
|
VNN architecture and training. The VNN model was pre-trained on a healthy population to glean information about healthy aging. To facilitate a Δ-Age that is transparent and methodologically interpretable, we used a multi-layer VNN model that yielded representations from the input cortical thickness features at the final layer, such that the unweighted mean of these representations formed the estimate for chronological age. Details on how this choice of architecture leads to anatomically interpretable Δ-Age are discussed subsequently.
|
Explainability of Δ-Age. The representations are learned by the VNN, in part, by transforming the input data according to the eigenspectrum of the anatomical covariance matrix [11]. By leveraging this fact, we characterize the explainability of Δ-Age by evaluating the inner products between the regional residuals derived from representations learned by the VNN and the eigenvectors of the anatomical covariance matrix. We anticipate observing significant differences in these inner product metrics between disease groups and healthy populations. These experiments will elucidate how the VNN processes the cortical thickness information from disease groups differently relative to the healthy population, thus lending explainability to the evaluation of the downstream Δ-Age statistic in different cohorts.
|
The VNN model consisted of two layers and yielded a representation from the input cortical thickness data via a transformation dictated by the anatomical covariance matrix. For training this VNN model, we leveraged the cortical thickness features from the healthy control population in the publicly available OASIS-3 dataset [23]. The healthy population in the OASIS-3 dataset consisted of 631 individuals (age = 67.71 ± 8.37 years, 367 females).
|
Explainability of Δ-Age. Next, we analyzed the inner products between the regional residuals derived from the representations learned by the VNNs and the eigenvectors of the anatomical covariance matrix. The eigenvectors of the anatomical covariance matrix were ordered from 0 to 67, with eigenvector 0 associated with the largest eigenvalue and the 67th eigenvector associated with the smallest eigenvalue. The inner product metrics were significantly different (ANOVA, p-value < 0.0001) between the AD and HC groups for eigenvectors 0, 1, 2, and 6 of the anatomical covariance matrix (Fig. 5a). Thus, the VNN model processed the cortical thickness features for the AD group significantly differently relative to the HC group, leading to the distinct distributions of Δ-Age in Fig. 3a, and these differences were dictated by variations in how the VNN exploited the eigenvectors of the anatomical covariance matrix for the AD and HC groups.
|
4RTNI. This dataset was collected as part of the 4-Repeat Tauopathy Neuroimaging Initiative (4RTNI) and used similar MRI acquisition and clinical assessments as the NIFD dataset. This dataset consisted of 59 individuals diagnosed with progressive supranuclear palsy (PSP; age = 70.79 ± 7.65 years, 32 females) and 45 individuals diagnosed with corticobasal syndrome (CBS; age = 66.71 ± 6.75 years, 24 females). CBS and PSP disorders are among the most common disorders within the broader family of APD [22]. In this paper, we denote the combined cohort of CBS and PSP as the APD group. Cortical thickness features curated according to the Desikan-Killiany atlas were derived from T1w MRI using similar pre-processing steps as those for the NIFD dataset. Furthermore, the HC group from the NIFD dataset was considered as the healthy control group for the 4RTNI dataset due to the similarity in the MRI acquisition methods of the NIFD and 4RTNI datasets.
|
B
|
For the temporal model, $E^*=(u^*,v^*)$ is locally asymptotically stable when $C_1=-(a_{11}+a_{22})>0$ and $C_2=(a_{11}a_{22}-a_{12}a_{21})>0$ hold. Here, we apply a heterogeneous perturbation around $E^*$ to obtain the criterion for instability of the spatio-temporal model. For the case of two-dimensional diffusion, we perturb the homogeneous steady state of the local system (13) around $(u^*,v^*)$ by
|
$$\mathbf{M}_k\begin{bmatrix}u_1\\ v_1\end{bmatrix}\equiv\begin{bmatrix}a_{11}-d_1k^2-\lambda & a_{12}-\xi u^*k^2\\ a_{21}+\eta v^*k^2 & a_{22}-d_2k^2-\lambda\end{bmatrix}\begin{bmatrix}u_1\\ v_1\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},$$
|
$$\mathbf{J}=\begin{pmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix},$$
|
$$\mathbf{J}_k\begin{bmatrix}u_1\\ v_1\end{bmatrix}\equiv\begin{bmatrix}a_{11}-d_1k^2-\lambda & a_{12}\\ a_{21} & a_{22}-d_2k^2-\lambda\end{bmatrix}\begin{bmatrix}u_1\\ v_1\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},$$
|
$$\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}u^*\\ v^*\end{pmatrix}+\epsilon\begin{pmatrix}u_1\\ v_1\end{pmatrix}\exp\bigl(\lambda t+i(k_xx+k_yy)\bigr),$$
|
D
|
(a) Antigen Presentation via APCs to activate T cells. Antigens are up-taken by the APCs and then bind to the MHC. Subsequently, the pMHC complex displayed on APCs can bind to some TCRs on T cells.
|
From a biological perspective, cellular immunity is vital to health by recognizing and eliminating pathogen-infected and abnormal cells.
|
(b) Recognition of Antigens by T cells. All cells present some peptides via the pMHC. Certain peptides can be recognized by T cells through the pMHC-TCR interaction, leading to their elimination by T cells.
|
(a) Antigen Presentation via APCs to activate T cells. Antigens are up-taken by the APCs and then bind to the MHC. Subsequently, the pMHC complex displayed on APCs can bind to some TCRs on T cells.
|
antigen presentation and antigen recognition, essential to adaptive immunity. Antigen-presenting cells (APCs) activate T cells through antigen presentation, while T cells recognize and eliminate abnormal cells via antigen recognition.
|
B
|
We show that, remarkably, slow but non-zero migration can enhance and accelerate the fluctuation-driven eradication of resistant cells ($m=10^{-4}$ to $10^{-3}$ in Fig. 4, Extended Data Fig. 8, and Supplementary Section S3 Movie 2).
|
Environmental variations, spatial structure, cellular migration, and fluctuations are all ubiquitous key factors influencing the evolution of cooperative antimicrobial resistance.
|
Understanding the joint influence of spatial structure and environmental variability on the evolution of microbial populations is therefore an important research avenue with many open questions.
|
Our results therefore demonstrate the critical and counterintuitive role of spatial migration that, jointly with environmental variability and demographic fluctuations, determines the maintenance or extinction of cooperative antimicrobial resistance.
|
The dynamic degradation of the drug can however play a critical role in the evolution of cooperative AMR, as shown in Ref. [21], where the fragmentation of the metapopulation into isolated demes enhances the maintenance of resistance.
|
C
|
Furthermore, side chain changes are identified on the P-I-F motif, the NPxxY motif, and the DRY motif, three receptor motifs that are known to undergo distinct rotamer changes in the transition from inactive to active receptor states (Katritch et al. 2013).
|
Namely, rotameric state changes in the backbone torsions of transmembrane helix TM6, proximal to Asp114, are coupled to the protonation state changes of Asp114.
|
The recognition of receptor regions where conformational changes are associated with activation and signaling suggests that the Asp114 protonation state and GPCR activation are intertwined.
|
The SSI values calculated for each residue reveal those parts of the receptor that signal information about the protonation state of Asp114 by coupled conformational state changes between the rotamer states and the Asp114 protonation state (Fig. 7).
|
This analysis highlights a concerted behaviour of water binding sites and TM6, whereby state changes to both are indicative of the protonation state of Asp114. It shows how the combined analysis of multiple different features and a comprehensive visualization help to find interrelations within a receptor and discover signaling pathways.
|
B
|
The motivation for this experimental design was to investigate the cognitive processing of speech events. Going from paradigm 1 to 3, the experimental complexity was increased in terms of task difficulty and practicality. In this way, we wished to see how the ERP component is regulated by the distinctiveness of the event and the selective listening mechanism.
|
The stimuli for paradigms 3 and 4 were generated in a similar way. Text scripts of the stories were first made and then used for audio synthesis using the text-to-speech tool. The audio files were then sliced into different snippets and normalized to have the same RMS amplitude. The text scripts for paradigm 3 were created by native Danish speakers. For experiment 4, the text scripts were sourced from two books (The Hobbit and Northern Lights) and excerpts from Danish radio broadcast news covering politics, society, education, sports, and social networks. The word onset times, to be used in the subsequent ERP analysis, were extracted from the text-to-speech tool.
|
This paradigm was designed to be similar to the conventional oddball paradigm in which subjects were presented with a sequence of two different classes of spoken words: animal names and cardinal numbers, or color names and cardinal numbers, from a loudspeaker situated one meter in front of the subject. The animal names and color names were predefined as the target events, while the cardinal numbers were the non-target events. The target and non-target events in this context play similar roles as oddball and standard events in the classical oddball paradigm. However, unlike conventional oddball paradigms, the discrimination between the target and non-target events was not based on the physical attributes of the stimuli but rather on the semantics of the stimuli. The stimuli for each trial were generated by randomly mixing twenty target and non-target events, with the number of target events between 2 and 5 and the first two events always being non-targets. The number of target events was chosen based on two criteria: 1) to provide enough data for analysis, and 2) not too many, since less probable events produce a larger cognitive response. The distance between two consecutive events was random and uniformly distributed between 0.8 and 1.2 seconds, but the total length of the trial was kept at 20 seconds. In each trial, the subject was asked to pay attention to the target events and passively count them. At the end of each trial, the subject reported the number of target events and received feedback on their accuracy. The counting task and feedback were used to encourage the subject to remain engaged in the task. There were sixteen trials in this paradigm. The target events in the first eight trials were animal names and the target events in the last eight trials were color names.
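A minimal sketch of how a single trial's event sequence could be generated under these constraints is shown below; the function name and seeding are assumptions, and the exact 20-second total length enforced in the study is only approximated here by the mean inter-event interval.

```python
# Hypothetical sketch of single-trial event generation: 20 events,
# 2-5 targets, first two events always non-targets, inter-event
# intervals drawn uniformly from 0.8-1.2 s.
import random

def make_trial(n_events=20, min_targets=2, max_targets=5):
    n_targets = random.randint(min_targets, max_targets)
    labels = ["non-target"] * n_events
    # Targets may occupy any position except the first two events.
    for idx in random.sample(range(2, n_events), n_targets):
        labels[idx] = "target"
    # Random inter-event onset intervals between 0.8 and 1.2 seconds.
    onsets, t = [], 0.0
    for _ in range(n_events):
        onsets.append(round(t, 3))
        t += random.uniform(0.8, 1.2)
    return list(zip(onsets, labels))
```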
|
In paradigm 3, the setup was similar to the setup of paradigm 2. The subject was presented with two competing streams from the same two speakers as in paradigm 2. However, in this case, the stimuli in each speaker were not sequences of spoken words but snippets of different stories, and each snippet had a duration of approximately 20 seconds. Each trial had one class of words predefined as target words. For instance, a target class could be human names. In each trial, the subject was asked to pay attention to only one of the two streams and focus on the target words of that stream. At the end of the trial, the subject answered a question about the target words and received feedback. There were 4 classes of target words: animal names, human names, color names, and plant species, distributed over 5 different stories. The story of the attended stream in each trial continued from where it ended in the previous trial. This made the stream easier to follow and attend to. There were 20 snippets in total. Each snippet appeared twice in two different trials: once as the attended stream and once as the unattended stream. The attended stream was also randomized and balanced between the left and right speakers.
|
The stimuli of all paradigms were in Danish and were synthesized using the Google Text-to-Speech tool v2.11.1 [36]. The voice configuration was randomly selected between da-DK-Wavenet-A (female) and da-DK-Wavenet-C (male) to generate each snippet in paradigms 3 and 4. In the end, there were 14 out of 20 male-voice snippets in paradigm 3 and 24 out of 40 male-voice snippets in paradigm 4. Each voice configuration was used to generate every single word in paradigms 1 and 2 once. The speed was set to 0.85, and the naturalness was verified by native Danish speakers to make sure that the pronunciation was as natural and clear as possible. In this process, minor changes were made to the text to increase speech quality. The target words in the paradigms were selected to be short (1 or 2 syllables) to resemble oddball events. The details of stimulus generation are as follows:
|
D
|
The relative fold-change in replication speed is defined as $f(T_r)/f_{eq}-1$ and computed as described above. Eq. 2 reduces to $\frac{\tau_s}{\tau_f}>1+\frac{1}{1-\mu}$ for the double delta and approximates the boundary between resets being beneficial or useless to fitness. It is computed numerically and shown in Fig. 5(c)iii: it qualitatively matches the boundary region of the relative fold-change in replication speed.
|
All-in-all, p1 replication activity and substitution mutation rate have been measured for 213 unique mutant DNA polymerases, as reported in Table S2 from [47]. We plotted and repurposed this data to investigate the relationship between speed and accuracy in Fig. 3. See Extended Fig. E2 and Supplementary Information for more details.
|
Results for other distributions are shown and discussed in the Supplementary Information and in Extended Fig. E6.
|
Real systems can show a distribution of replication times $P^{eq}(T_{\rm rep})$ for numerous mechanistic reasons, ranging from kinetic traps in self-assembly to stalling in DNA replication or other frustrated states along some trajectories $x(t)$ but not others [66, 68, 69]. We studied several families of replication time distributions $P^{eq}(T_{\rm rep})$; we find that some distributions never show order-through-speed while others do so in specific regimes. See Fig. 5c and Extended Data Fig. E6.
|
Figure E6: Resets are beneficial only for wide distributions of replication times. (i) The residual $y=(\mathrm{LHS}-\mathrm{RHS})$ of the inequality in Eq. 2 is plotted as a function of reset time $T_r$ and for different values of variance (see colorbar): when $y>0$, resets are beneficial, i.e. increase replication speed. We find that $y>0$ for some choices of $T_r$ for the Log-normal and Fréchet distributions. (ii) The fold-change in replication speed (Eq. E5) is shown for different choices of variance parameters and for different probability distributions of replication times: Normal, Weibull, Log-normal, Fréchet and Gumbel. The order $\Delta S^*$ with the optimal demon is computed as in Eq. 3 imposing Eq. E6, which sets the constraint for an optimal demon $(f^*,T_r^*)$. See the Materials and Methods and Supplementary Information for the calculation of how order $\Delta S$ is defined from the distribution of times, leading to Eq. 3, which shows that the order of trajectories is bounded by the order of times. In the cases where resets provide a fitness benefit, for Log-normal and Fréchet, the optimal reset time is shown in the plots (grey, right y-axis). We fix the mean of all distributions to unity; for the plots with the Fréchet distribution in (ii), see the Supplementary Information for more details.
|
B
|
The current study aimed to apply Neural CDEs to model the disease progression of pulmonary fibrosis and Alzheimer’s Disease using multimodal (structural and image data) irregularly sampled data.
|
Neural CDEs [10] are a variant of NDEs designed to work specifically with irregularly sampled time series data. As shown in Figure 2, Neural CDEs use a "time-varying vector" $\mathbf{X}$ to create a representation of the local dynamics from a calculated interpolation of the irregularly sampled data, upon which a hidden state is calculated. A linear map is then used to make a final prediction. Mathematically, this can be represented by the following set of equations:
|
Chen et al. 2018 [9] reported a novel approach to deep learning by using ordinary differential equations (ODEs) as continuous-depth models. In this article, the authors demonstrated that ODE-based models can outperform traditional ResNet architectures [16], offering improved efficiency and adaptability. Rubanova et al. [17] demonstrated that ODE-based models outperformed their RNN-based counterparts on irregularly sampled data. Further to this study, Kidger et al., 2020 [10] developed the Neural CDE as a continuous analog of an RNN for sepsis prediction. In this study the authors introduced a new model for handling irregularly sampled time series data using neural controlled differential equations (Neural CDEs). The paper demonstrates that Neural CDEs can effectively model complex dynamics in irregular time series, outperforming traditional methods. The authors also highlight the potential of Neural CDEs in various applications, including healthcare and finance, where irregular time series are common. Wong et al., 2021 [18] reported a specialized deep-learning model, Fibrosis-Net, designed to predict the progression of pulmonary fibrosis from chest CT images. The current work attempts to leverage the state-of-the-art Neural CDE approach in modeling time series data containing multiple modalities (structured data + image data).
|
The current study aimed to apply Neural CDEs to model the disease progression of pulmonary fibrosis and Alzheimer’s Disease using multimodal (structural and image data) irregularly sampled data.
|
Neural controlled differential equations (Neural CDEs) are an extension of NDEs specifically designed to handle irregularly sampled multivariate time-series data [10].
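As a rough, framework-agnostic sketch of this idea (not the authors' implementation, and not a substitute for the torchcde-style solvers used in the Neural CDE literature), the hidden state can be updated by applying a learned vector field to increments of the interpolated control path; all dimensions and layer sizes below are assumptions.

```python
# Hypothetical Euler-discretized Neural CDE in plain PyTorch:
# dz = f_theta(z) dX, followed by a linear read-out of the final state.
import torch
import torch.nn as nn

class NeuralCDE(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.embed = nn.Linear(input_dim, hidden_dim)   # z_0 from first observation
        # Vector field f_theta: maps hidden state to a (hidden x input) matrix.
        self.vector_field = nn.Sequential(
            nn.Linear(hidden_dim, 128), nn.Tanh(),
            nn.Linear(128, hidden_dim * input_dim), nn.Tanh(),
        )
        self.readout = nn.Linear(hidden_dim, output_dim)  # final linear map
        self.hidden_dim, self.input_dim = hidden_dim, input_dim

    def forward(self, path):
        # path: (batch, time, input_dim) interpolation of the irregular samples
        z = self.embed(path[:, 0])
        for t in range(1, path.size(1)):
            dX = path[:, t] - path[:, t - 1]              # control increment
            f = self.vector_field(z).view(-1, self.hidden_dim, self.input_dim)
            z = z + torch.bmm(f, dX.unsqueeze(-1)).squeeze(-1)  # dz = f(z) dX
        return self.readout(z)
```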
|
B
|
We used waves 8-14 (2006-2018) from HRS via the RAND preprocessed files [18] (only these waves had all needed variables). Waves are measured every 2 years, but gait and grip are only measured every 4 years. We thus analyzed two sets of preprocessed data: one for predicting FPFP5 deficits (feature selection and prediction of decline) and another for survival, with spacings of 4 years and 2 years, respectively. We excluded individuals who entered the study aged below 60 (~17000), for a total of 13848 for the survival study. For the FPFP5 deficit analysis we further rejected individuals missing both gait and grip measurements, leaving 5619 individuals. After preprocessing, the median gap between measurements was 4.1 years (interquartile range: 0.3 years).
|
We used waves 8-14 (2006-2018) from HRS via the RAND preprocessed files [18] (only these waves had all needed variables). Waves are measured every 2 years, but gait and grip are only measured every 4 years. We thus analyzed two sets of preprocessed data: one for predicting FPFP5 deficits (feature selection and prediction of decline) and another for survival, with spacings of 4 years and 2 years, respectively. We excluded individuals who entered the study aged below 60 (~17000), for a total of 13848 for the survival study. For the FPFP5 deficit analysis we further rejected individuals missing both gait and grip measurements, leaving 5619 individuals. After preprocessing, the median gap between measurements was 4.1 years (interquartile range: 0.3 years).
|
We did not include our ELSA survival results in the main text because of the very low hazard observed, especially between waves 4 and 6. This is illustrated by Figure S6. HRS and NHANES survival overlap perfectly but ELSA has much higher survival, especially between waves 4 to 6, indicating an anomalously low hazard. We are therefore reluctant to put emphasis on the ELSA survival results but nevertheless include them here in the supplemental. Broadly, the ELSA results qualitatively agree with the HRS and NHANES results but with a notably lower mortality rate.
|
We used waves 4 and 6 from ELSA [19] (only these waves had all needed variables). We excluded 1055 individuals missing both gait and grip measurements, and an additional 13 with top-coded age. ELSA survival estimates were based on end-of-life interviews, which capture only a fraction of the deaths due to a variety of response rate and fieldwork issues [20]. This means that we necessarily underestimate the mortality rate because we are forced to assume that any individual without an end-of-life interview was censored instead of dying. We therefore present ELSA survival data only in the supplemental (results qualitatively agree with NHANES and HRS but with a much lower mortality rate). After preprocessing, the median gap between measurements was 4.0 years (interquartile range: 0.0 years).
|
We used data from three national studies: HRS (longitudinal), ELSA (longitudinal), and NHANES (cross-sectional). Our goal is to understand relationships between variables. Definitions necessarily varied across the studies for both the FI and FP. We considered separate exclusions for predicting health deficits versus survival. When predicting health deficits we considered only individuals with all three of the measurements weight, weakness, and activity, and at least one of grip or gait (both grip and gait are proxies for sarcopenia [1], so imputation should pick up one from the other). We did not apply this cut when predicting survival; instead we considered all individuals irrespective of FPFP5 measurements. Our focus is on age- and frailty-related decline, and hence we consider only individuals aged 60+; this also avoids issues with gated variables [14]. We also dropped any individuals with top-coded ages, since we are interested in age-dependence (i.e. ages capped at maximum values, age 90+ for ELSA and age 85+ for NHANES).
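A hedged sketch of these exclusion rules as a pandas filter is given below; all column names (age_at_entry, age_topcoded, grip, gait, weight_loss, weakness, low_activity) are hypothetical placeholders, not the studies' actual variable names.

```python
# Illustrative sketch of the exclusion criteria described above.
import pandas as pd

def apply_exclusions(df: pd.DataFrame, predict_deficits: bool = True) -> pd.DataFrame:
    df = df[df["age_at_entry"] >= 60]      # keep individuals aged 60+
    df = df[~df["age_topcoded"]]           # drop top-coded ages
    if predict_deficits:
        # Require at least one of grip or gait, plus the other three FP measures.
        has_proxy = df["grip"].notna() | df["gait"].notna()
        has_rest = df[["weight_loss", "weakness", "low_activity"]].notna().all(axis=1)
        df = df[has_proxy & has_rest]
    return df
```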
|
C
|
We pretrain a 7-billion-parameter autoregressive transformer language model, referred to as METAGENE-1, on a novel corpus of diverse metagenomic DNA and RNA sequences comprising over 1.5 trillion base pairs.
|
Specifically, we pretrain a 7-billion-parameter autoregressive transformer model, which we refer to as METAGENE-1, on a diverse corpus of DNA and RNA sequences comprising over 1.5 trillion base pairs sourced from wastewater samples, which were processed and sequenced using deep metagenomic (next-generation) sequencing (Bragg and Tyson, 2014, Consortium, 2021).
|
This dataset is sourced from a diverse set of human wastewater samples, which were processed and sequenced using deep metagenomic (next-generation) sequencing methods.
|
The dataset was generated using deep metagenomic sequencing, specifically leveraging Illumina sequencing technology, commonly referred to as next-generation sequencing (NGS) or high-throughput sequencing, in which billions of nucleic acid fragments are simultaneously sequenced in a massively parallel manner.
|
Our metagenomic foundation model differs from these prior works in a few important ways. First, our pretraining dataset comprises shorter metagenomic sequences (arising from metagenomic next-generation/massively-parallel sequencing methods) performed on samples of human wastewater collected across many locations; these samples contain potentially tens-of-thousands of species across a wide range of taxonomic ranks, and capture a representative distribution of the full human-adjacent microbiome. This includes both recognized species and many unknown or unclassified sequences (see Sec. 3.1). Another distinction is the model architecture: we use a decoder-only transformer model, akin to the Llama and GPT model families, which we further motivate in Sec. 3.3.
|
B
|
Let $X$ represent the DNA methylation data matrix, where $X$ is an $N\times M$ matrix, with $N$ representing the number of samples and $M$ the total number of CpG sites. The vector $y$, which corresponds to the chronological ages of the $N$ samples, is an $N\times 1$ vector.
|
In the 0-10 age range, the correlation is remarkably high, indicating a rapid rate of aging or growth during this period. A similarly fast rate of change is observed in the 10-20 age range. However, the aging rate appears to slow down in the 20-30 age range, and further deceleration is observed in the 40-50 and 70-80 age ranges. Interestingly, the 100+ age group shows a very high correlation, likely due to the smaller sample size and increased variability at the end of life. This high correlation suggests a faster rate of aging in this group, although this conclusion warrants validation with a larger sample size. These findings underscore the variability in aging rates across different life stages, with certain age windows exhibiting rapid changes in methylation patterns, while others reflect a slower pace of aging.
|
iTARGET-(34-60-78): This approach segments ages into biologically significant intervals: [0-34), [34-60), [60-78), and 78+.
|
We employ two age grouping strategies. The first divides the age range into decade-sized intervals: [0-10), [10-20), [20-30), …, [90-100), and 100+. This approach is motivated by its interpretability, as decade intervals are commonly used and easily understood, making the results accessible to a broad audience. The second grouping is based on research by [24], which identified key inflection points in aging at approximately 34, 60, and 78 years. This strategy divides the age range into four segments: [0-34), [34-60), [60-78), and 78+, aligning with significant biological and proteomic changes that correspond to shifts in aging patterns.
|
The third set of experiments compares two age grouping strategies for DNA methylation age prediction. The first strategy uses decade-sized intervals (e.g., [0-10), [10-20), …, [90-100)) for ease of interpretability. The second strategy, informed by [24], divides ages into segments at key inflection points: [0-34), [34-60), [60-78), and 78+, aligning with significant biological shifts observed in plasma proteome profiles.
|
C
|
A key mechanism of our model is the refractoriness of plasticity, which prevents a continuous update of the post-synaptic neuron’s incoming weights while it is bursting. Figure 2D shows that refractoriness is quite important for the asynchronous model to approximate the learning trajectory of the discrete model, as non-existent (1 step) or small (10 steps) refractoriness leads to poor average weight similarity. Interestingly, this refractory period has also been observed in in vitro experiments [9]. Also note that without a refractory period this model will learn a non-factorised, winner-take-all representation similar to the one learned by the continuous model (see Appendix figures C.5 and C.6). Varying the threshold for bursting does not affect the learning much unless we set it to zero, in which case the network seems to diverge from the discrete version (Fig. 2E). Varying the hold period (i.e. the number of iterations the stimulus is held for the network to reach a stable state) does affect the learning trajectory (Fig. 2F), which is interesting since the standard version of the discrete network (which uses the same learning rule as our model - Hebbian) stops learning as the hold period goes below 150 (see Appendix figure A.1).
|
We further explore how learning differs between these models by counting the number of synapses that are updated (i.e. have a gradient entry different from 0). Figures 2G and 2H show the number of synapses updated at each Euler step during a small simulation window. As expected, the discrete network has a staircase-like shape since it only updates once every 500 steps (i.e. the hold period for this simulation). It is interesting to note that the asynchronous network follows a very similar trajectory to the discrete network for a random untrained network (Fig. 2G). However, as we train the networks, the discrete model seems to increase the number of updates while the asynchronous model slightly decreases them (Fig. 2I).
|
To assess the similarity between learning dynamics, we compare the learning trajectories of both asynchronous and continuous models with the discrete model. We initialize all models with the same weights and present the same stimulus sequence, and measure the cosine similarity of each neuron’s incoming weights (see Appendix E3). Figures 2A and 2B show that after learning most neurons in the asynchronous model are practically identical (similarity 0.95-1.0) to the neurons in the discrete model, while the neurons in the continuous model diverge significantly. The divergence of the weights from the continuous model begins right at the start of training (Fig. 2C), demonstrating that this network learns a qualitatively different representation.
|
Figure 2: The discrete and asynchronous models learn very similar representations. (A) Histogram of cosine similarities of the feed-forward weight between the discrete model and the continuous (orange) and asynchronous (blue) model. (B) As A, for the recurrent weights. (C) Average cosine similarity of feed-forward weights, compared to the discrete model, as the simulation evolves (colors as in A). (D) Average cosine similarity of feed-forward (green) and recurrent (M) weights between discrete and asynchronous models for different refractory period durations in the asynchronous model. Dashed lines are similarities of the continuous model. (E) As D, for different bursting thresholds in the asynchronous model. (F) As D, for different presentation durations in the asynchronous model. (G) Short simulation window showing the number of synapses updated at each Euler step for untrained networks. (H) Same as G but for trained networks. (I) Average number of synaptic updates taken at uniform intervals throughout the whole simulation.
|
A key mechanism of our model is the refractoriness of plasticity, which prevents a continuous update of the post-synaptic neuron’s incoming weights while it is bursting. Figure 2D shows that refractoriness is quite important for the asynchronous model to approximate the learning trajectory of the discrete model, as non-existent (1 step) or small (10 steps) refractoriness leads to poor average weight similarity. Interestingly, this refractory period has also been observed in in vitro experiments [9]. Also note that without a refractory period this model will learn a non-factorised, winner-take-all representation similar to the one learned by the continuous model (see Appendix figures C.5 and C.6). Varying the threshold for bursting does not affect the learning much unless we set it to zero, in which case the network seems to diverge from the discrete version (Fig. 2E). Varying the hold period (i.e. the number of iterations the stimulus is held for the network to reach a stable state) does affect the learning trajectory (Fig. 2F), which is interesting since the standard version of the discrete network (which uses the same learning rule as our model - Hebbian) stops learning as the hold period goes below 150 (see Appendix figure A.1).
|
A
|
We demonstrate that UniGuide performs competitively or even surpasses specialised baseline models, underscoring its practical relevance and transferability to diverse drug discovery scenarios.
|
Table 1: Ligand-Based Drug Design. Results taken from Chen et al. [14] are indicated with (*). We highlight the best conditioning approach for
|
Table 2: Structure-Based Drug Design. Quantitative comparison of generated ligands for target pockets from the CrossDocked and Binding MOAD test sets. Results taken from the respective works are indicated with (*).
|
conditioning approach for the DiffSBDD backbone in bold and underline the best approach over all methods.
|
Table 3: Linker Design. Results taken from Igashov et al. [13] are indicated with (*). We underline the best method overall.
|
A
|
For the design of peptide sequences, ProT-Diff (Wang et al., 2024d) combines a pre-trained protein language model (PLM) ProtT5-XL-UniRef50 (Elnaggar et al., 2020) with an improved diffusion model to generate de novo candidate sequences for antimicrobial peptides (AMPs). AMP-Diffusion (Chen et al., 2024c) uses PLM ESM2 (Lin et al., 2023) for latent diffusion to design AMP sequences with desirable physicochemical properties. This model is versatile and has the potential to be extended to general protein design tasks.
|
Diff-AMP (Wang et al., 2024a) integrates thermodynamic diffusion and attention mechanisms into reinforcement learning to advance research on AMP generation. Sequence-based diffusion models complement structure-based approaches by aiding in sequence-to-function prediction or optimizing sequence design for structural goals.
|
ForceGen (Ni et al., 2024) develops a pLDM by combining the ESM Metagenomic Atlas (Lin et al., 2023), a model of the ESM family, with an attention-based diffusion model (Ni et al., 2023) to generate protein sequences and structures with nonlinear mechanical properties.
|
For the design of peptide sequences, ProT-Diff (Wang et al., 2024d) combines a pre-trained protein language model (PLM) ProtT5-XL-UniRef50 (Elnaggar et al., 2020) with an improved diffusion model to generate de novo candidate sequences for antimicrobial peptides (AMPs). AMP-Diffusion (Chen et al., 2024c) uses PLM ESM2 (Lin et al., 2023) for latent diffusion to design AMP sequences with desirable physicochemical properties. This model is versatile and has the potential to be extended to general protein design tasks.
|
ProteinGenerator (Lisanza et al., 2023) is a sequence space diffusion model based on RoseTTAFold that simultaneously generates protein sequences and structures. The success rate of ProteinGenerator in generating long sequences that fold to the designed structure is lower than that of RFDiffusion, which may reflect the intrinsic difference between diffusion in sequence and structure spaces.
|
A
|
The subunits of GABAA receptors include α (GABRA1, GABRA2, GABRA3, GABRA4, GABRA5, GABRA6), β (GABRB1, GABRB2, GABRB3), γ (GABRG1, GABRG2, GABRG3), δ (GABRD), ϵ (GABRE), π (GABRP), θ (GABRQ), and ρ (GABRR1, GABRR2, GABRR3). In contrast, the GABAB receptor has two subunits: GABBR1 and GABBR2. Additionally, we collected other genes related to GABA receptors, including GABARAP, GABARAPL1, and GABARAPL2. In total, there are 24 targets associated with GABA receptors.
|
We extracted the corresponding 24 PPI networks by sequentially entering these gene names into the STRING database. Within each network, there is a core sub-network of proteins that interact directly with the GABA receptor, while the directly and indirectly interacting proteins together form the global network. We limited the number of proteins in each global network to 201. These 24 networks, with a total of 4824 proteins, are not completely independent, as some overlapping proteins are present. After removing the overlapping proteins, 980 proteins are left in the 24 PPI networks.
|
Compounds that act as agonists or antagonists of the GABA receptor exhibit pharmacological effects in anesthesia, which encourages the search for additional compounds that bind to the GABA receptor. The desired drugs must demonstrate specificity for the target protein without causing adverse side effects on other proteins. To evaluate the binding effects of small molecules on receptor proteins and other proteins within the PPI network, we collected the SMILES strings of inhibitor compounds for each protein from the ChEMBL database and developed ML models. These models were then used to systematically analyze the side effects and repurposing potential of inhibitor compounds.
|
Proteomic technology has shown increasing potential in anesthesia [8], and the use of proteomic tools to study anesthetic binding sites has offered a better understanding of the mechanisms of anesthetic action. Protein-protein interaction (PPI) networks at the proteomics level provide a systematic framework for exploring potential therapeutic strategies and their possible side effects [17]. These networks encompass the direct and indirect connections among various proteins, collectively driving the complex biological activities within an organism [9]. Utilizing the powerful String v11 database (https://string-db.org/), we can obtain extensive and diverse PPI datasets related to specific proteins or diseases [44, 9]. In the field of anesthesia research, the PPI network associated with GABA receptors can be extracted using the String v11 database. Through in-depth analysis of this network, we not only gain insights into how drugs interact with GABA receptors but also potentially uncover new drug targets. This approach could optimize pharmacotherapy regimens and reduce the incidence of side effects.
|
Figure 1: Flowchart of nearly optimal lead compounds screening for Gamma-aminobutyric acid (GABA) receptor agonists. a: The protein-protein interaction (PPI) networks of 24 GABA receptor subtypes involve 4824 proteins, and each receptor subtype has a core and global PPI network. Here only two PPI networks (GABRA1 and GABRA5) with several compounds are shown for simplicity. For more detailed information on the PPI networks, please refer to Table of the Supporting Information. b: The drug target interaction (DTI) network constructed against GABA receptors includes 136 targets and 183,250 inhibitor compounds that are collected from the ChEMBL database in c. Here, only four targets (GABRA1, GABRA2, GABRA5, and GABBR2) with a few compounds are presented for simplicity. The yellow dashed lines indicate the connections among the 136 targets. d: Nearly optimal lead compounds were screened by two technical routes, the first being a predictive model for side effects and repurposing assessment as well as an ADMET screening model, and the second being molecular optimization of existing drugs.
|
A
|
The performance of the models was quantified by correlating features of the reconstructions with the stimuli. Specifically, features from reconstructed images and original stimuli are extracted using a pretrained AlexNet model at conv1, conv2, conv3, conv4, conv5, FC6, FC7, and FC8. Subsequently, each feature layer is analyzed using the Pearson correlation method to assess the correlation between the features of the reconstructions and those of the original stimuli.
|
The introduction of the IRFA model contributes several advancements to the field. First, it demonstrates that incorporating feature-based selective attention, alongside spatial attention, into brain representations significantly enhances the quality of natural image reconstruction from evoked brain responses in the V1, V4, and IT regions of the macaque.
|
In our investigation, we explored the impact of varying the number of dedicated feature channels within our model, specifically training with 4, 16, 32, and 64 features, to determine if increasing feature separability leads to enhanced model performance. Given the convolutional nature of the model, there is an inherent preservation of spatial integrity, as the convolution operation utilizes relevant spatial information for computation. Our hypothesis posited that while an increase in feature channels could introduce redundancy and augment the complexity of the model due to an increase in learnable parameters, it would nevertheless result in improved reconstruction capabilities. Supporting our hypothesis, the results clearly demonstrate that a greater number of features significantly enhances reconstruction quality as shown in Figure 2.
|
The U-NET’s output is compared with the target using three losses (an adversarial loss, a feature loss (VGG) and an L1 loss). The adversarial loss is provided by a discriminator that is trained in parallel with the reconstruction model and consists of 5 convolutional layers (see Fig. 1B). The feature loss uses the full set of convolutional layers of the VGG model. The model was implemented in PyTorch and optimized with Adam for 400 epochs with a batch size of 8. The implementation of the model can be found in the source code (https://github.com/neuralcodinglab/IRFA).
|
We systematically trained the model to reconstruct images using sets of 4, 16, 32, and 64 learnable attention maps. This approach allows us to evaluate the optimal number of features required for effective reconstruction. The quality of the reconstructions is quantitatively compared with a baseline model, based on the end-to-end reconstruction model of Shen et al. [11]. Additionally, we assess the consistency of the spatial and feature inverse receptive fields (feature IRFs) across different stimuli by computing the standard deviation. Lastly, we employ a data-driven approach to visualize changes in feature RFs, using t-SNE for dimensionality reduction to explore whether electrodes exhibit preferences for certain features.
|
B
|
As we argued, the formation of self-conscious awareness in the CTM should be a procedure instead of a single activity of any processor. Since self-consciousness and consciousness are two analogous concepts, we assume that they are generated by the same procedure. Also, the duality of self-consciousness requires that the CTM be aware of both the subjective self and the objective self, which refers to the ability to be aware of its own state and to generate applicable gists (instructions, thoughts, …). We designed a group of special processors, the MIT, to meet those constraints.
|
Self-conscious Awareness: the Self-conscious Content is broadcast to all LTM processors and gets received.
|
Self-conscious Content: a chunk that wins the competition tree and reaches STM; in addition, this chunk should be made by the MIT (but not all chunks made by the MIT can be self-conscious content).
|
In this section, we discussed some functions of the MIT, but some issues still remain. Not all chunks generated by the MIT will ultimately become self-conscious content, as we have mentioned in the definition. For example, the understanding of the outer world generated by the MoTW does not involve self-awareness; even if it undergoes broadcast, it can only form conscious awareness rather than self-conscious awareness.
|
Based on the definition of conscious content and conscious awareness, we here present the definition of Self-conscious Content and Self-conscious Awareness as follows:
|
D
|
In this review, we aim to offer an in-depth exploration of the diverse dynamical behaviors encapsulated within the FHN model. The widespread adoption of the FHN model across physics and biology can be attributed to the model’s remarkable versatility in capturing a wide array of dynamical phenomena while maintaining a relatively simple mathematical formulation.
|
In conclusion, we hope our review will serve as a guide for understanding and using the diverse dynamical behaviors offered by the FHN model. Throughout our analysis, stability analyses and bifurcation studies provided insights into the observed dynamics. By exploring its applications across multiple disciplines, we aimed to inspire further exploration and application of the FHN model in diverse scientific domains.
|
Our review is structured around delineating the most prominent dynamical behaviors observed within the FHN model. We categorize our analysis into three primary sections: (i) examining the foundational FHN model, characterized by a system of two nonlinear coupled ordinary differential equations (ODEs) [Eq. 8]; (ii) studying the diffusively coupled FHN model, which introduces spatial coupling through diffusion [Eq. 10]; and (iii) exploring discretely coupled FHN equations [Eq. 11]. In each section, we complement our discussion of observed dynamics with thorough stability analyses and bifurcation studies. This approach allows readers to navigate the parameter space effectively, enabling them to target specific dynamical regimes of interest.
|
Lastly, we explored discretely coupled FHN equations [Eq. 11]. This is the broadest category as here one can consider a multitude of different network topologies and coupling terms. We focussed on synchronization properties in two coupled FHN modules, the existence of traveling waves when transitioning from continuous diffusive coupling to discrete coupling, and the emergence of chimera states characterized by spatio-temporal patterns of coherent and incoherent behavior.
|
We structured our analysis into three primary sections. Firstly, we examined the original FHN model [Eq. 8], discussing widely observed dynamical regimes such as monostability, multistability, relaxation oscillations, and excitability. We examined the role of local and global bifurcations in shaping these regimes, emphasizing the importance of time scale separation.
|
B
|
Intermediate Coupling (0.017 < c < 0.295). At this level of coupling, the external system transitions to an excitable state in response to the internal pulse. Consequently, when an oscillation from a non-driven region intersects with a driven region, it triggers the latter to generate traveling pulses. These pulses continue periodically even after the internal pulse is gone, effectively becoming phase waves. Additionally, faster phase waves, mirroring the shape of the initial internal pulse, emerge and travel in the opposite direction, leading to interactions and eventual annihilation upon collision.
|
As in the previous case, phase waves that are synchronized with the driving pulse emerge within the system (Fig. 8A). In addition, the system becomes excitable under the influence of the driving pulse and is subject to perturbations from adjacent oscillatory regions. These perturbations trigger an extended response that moves across the excitable region as a traveling pulse. These traveling pulses and phase waves move in opposing directions, leading to their mutual annihilation.
|
Intermediate Coupling (0.017 < c < 0.295). At this level of coupling, the external system transitions to an excitable state in response to the internal pulse. Consequently, when an oscillation from a non-driven region intersects with a driven region, it triggers the latter to generate traveling pulses. These pulses continue periodically even after the internal pulse is gone, effectively becoming phase waves. Additionally, faster phase waves, mirroring the shape of the initial internal pulse, emerge and travel in the opposite direction, leading to interactions and eventual annihilation upon collision.
|
Our simple setup of interconnected FitzHugh-Nagumo (FHN) models, inspired by cellular structures, captures a wide range of phase-related dynamics. In this setup, a traveling wave within the internal system, analogous to a cell’s cytoplasm, drives the dynamics of the external system, similar to a cell’s cortex, without being influenced in return. The passage of this driving wave pulse through the external system can trigger one of three distinct regimes: oscillatory, excitable, and non-excitable. In the oscillatory regime, phase patterning emerges. The excitable regime is distinguished by the coexistence of phase waves at different velocities and traveling pulses. Meanwhile, the non-excitable regime has phase waves that become distorted.
|
Strong Coupling (c > 0.295). With strong coupling, the external system becomes non-excitable during the passage of the internal pulse, yet minor perturbations from equilibrium are still possible. Phase waves closely tied to the internal pulse form and traverse the ring, ultimately self-annihilating. Notably, these phase waves tend to widen and contract over time.
|
D
|
Table 1: Average Test set reconstruction results of our discrete auto-encoding method for several down-sampling ratios and (implicit) codebook sizes. For CASP-15 we report the median of the metrics due to the limited dataset size.
|
Note that an RMSD below 2 Å is considered of the order of experimental resolution and two proteins with a TM-score > 0.5 are considered to have the same fold.
|
The root mean square distance (RMSD) between two structures is computed by calculating the square root of the average of the squared distances between corresponding atoms of the structures after the optimal superposition has been found. The TM-score (Y. and J., 2005) is a normalised measure of how similar two structures are, with a score of 1 denoting the structures are identical.
|
For context, two structures are considered to have a similar fold when their TM-score exceeds 0.5 (Xu and Y., 2010), and an RMSD below 2 Å is usually seen as approaching experimental resolution.
|
Table 2: Structure generation metrics for our method alongside baselines (and nature) specifically designed for protein structure generation. Self-consistent TM-score (scTM) and self-consistent RMSD (scRMSD) are two different ways to assess the designability of the generated structure. Note that while a high novelty score is desirable, structures that are too far from the reference dataset can also be a sign of unfeasible proteins.
|
A
|
A future research direction is deriving the brain’s slow dynamics and learning mechanisms. Training many parameters over extended periods allows black-box models, such as neural networks representing protein networks, to act as homeostatic-control agents within cells. Promising results have emerged in simplified neuron models [12]. Transferring the approach to realistic neuron models creates a data-driven possibility to recover biological slow dynamics.
|
Limitations of the method relate to the fact that brain models extend beyond the cable equation: spike propagation with delays, stochastic models, reaction kinetics and ion dynamics, and Nernst potentials are not included in the method. These can be solved mathematically but would require modifications to the brain simulators, such as annotating spikes with gradient vectors.
|
In this section, we explain how the gradient model is derived from the cable equations and evaluate the approach. We start with a description of the cable equation as used in brain-simulation software, followed by a short description of the sensitivity equation. We then combine these equations to form the gradient model and discuss the stability of homeostatic mechanisms built on top of these gradients and the necessity of a forgetting parameter λ. We conclude with the evaluation methods.
|
In this brief, we introduced gradient diffusion, a methodology that facilitates the calculation of parameter gradients for any existing, unmodified model-and-neurosimulator combination, thereby enabling support for homeostatic control. This approach allows for the efficient tuning of realistic neuron models and the implementation of homeostatic mechanisms in large networks, with the overarching goal of developing more robust, composable, and adaptable brain models that elucidate both the slow and fast dynamics of the brain.
|
The cable equation in eq. 8 is taken from the Arbor brain simulator, but the same equation is solved by NEURON or EDEN.
|
A
|
The parameter κ, which sets the relative lifespan of phages versus hosts and thus impacts the density of phages in the environment and accordingly the rate of infections, has surprisingly little impact on the dynamics, see appendix 15.
|
In both the phage therapy and biodetection examples, the crucial requirements are that bacteria are the limiting agent and that adsorption happens quickly (Goodridge, 2008). Both can be accomplished by high phage densities, creating potential scenarios where many adsorption events occur in a small time window and opening the door for simultaneous infections. Further, because phages can outnumber bacteria by an order of magnitude in many environments (Wasik and Turner, 2013), simultaneous infections may well be relevant in natural settings as well. If phages are concentrated around their bacterial hosts, say after a lysing event, then many adsorptions could occur in a short time frame, again creating the potential for simultaneous infections.
|
Our work focuses on the ecological impact of simultaneous infections but suggests an interesting evolutionary question. In our model, host death is inevitable after infection. However, the burst size λ influences phage density and accordingly the rate at which hosts are infected and then lyse. Consequently λ makes a suitable proxy for virulence, or the increase in host mortality due to infection, as noted in the discussion of Dennehy and Turner (2004). Though λ is a key parameter in our analysis, it does not evolve.
|
Phages infect host cells by adsorbing (attaching) to receptors on the host cell wall and then delivering the genomic content into the host cytoplasm. Phages are much smaller than bacteria and each host cell presents multiple receptors that phages can bind to, so multiple phages can adsorb to a single host cell, though not all adsorptions necessarily lead to infection. Multiple adsorptions become increasingly likely at higher phage densities (Turner and Duffy, 2008; Christen et al., 1990) and can become the dominant transmission mode at sufficiently high densities (Turner and Chao, 1999). If phage densities are very high, it is possible that multiple phages simultaneously adsorb to and then infect the same host cell. Here, we explore the impact of simultaneous infections on phage-host ecology. We define simultaneous infection as infections that occur within a very small time window and distinguish between simultaneous infection and previously studied forms of co-infection, where after a pause an already infected host cell is infected again. Interestingly, given sufficient time phages can prevent multiple, sequential infections through host cell manipulations (Joseph et al., 2009) but these mechanisms are not applicable to the small time window relevant for simultaneous infections.
|
Our model describes a phage-host system where multiple phages can simultaneously infect a single host. Considering the high densities of phage proposed for use in various applications, as well as the high densities of phage in many natural settings, simultaneous infections are a natural and relevant infection dynamic to consider. Our results shed light on several ecological features of this system and suggest interesting evolutionary implications.
|
D
|
While it is certainly impressive that test tubes filled with some chemicals can be used for handwritten number recognition, it should of course be noted that this is certainly not the most efficient approach if the recognition of handwritten numbers is our primary goal. If the numbers are primarily a proof of principle, what then can these methods be useful for?
|
Moreover, there can be contexts where having a neural network that operates slower can actually be an advantage. (Thanks to Raphael Wittkowski for bringing this to my attention.) A good example would be a network that processes temporal input signals, as is required, e.g., in speech recognition. In this case, it is advantageous if the system’s dynamics takes place on roughly the same time scales as the input signal rather than being significantly faster. Consider reservoir computing, where the employed physical system possesses a fading memory, as an example – if, after the second word of a sentence to be processed, all memory of the first word has already vanished, the system cannot process the sentence as a whole. A particular advantage of DNA neural networks in this context is that the speed at which the reactions take place (and thereby the speed at which the network operates) can be tuned by the experimenter, namely by changing the lengths of the toeholds (see Section 5).
|
In this chapter, I have provided a brief introduction to neural networks consisting of DNA, using as an example the winner-take-all network proposed in Ref. Cherry and Qian (2018). The input data is provided as a DNA strand and is processed via biochemical reactions. On this basis, it is possible to recognize handwritten digits using DNA. Moreover, I have briefly discussed a proposal for DNA-based reservoir computing Goudarzi et al. (2013). Such approaches constitute a promising starting point for the development of intelligent matter based on biological materials, and might also find applications in, for instance, medical contexts where input data is already present in a biochemical form.
|
In a nutshell, reservoir computing employs a dynamical system (the reservoir) that is driven by an input signal. The response of the system then serves as the input for a neural network with a single layer (the readout layer) that converts this response into the output. This readout layer is the only part of the system that is changed during the training process. Since the reservoir does not have to be changed, the reservoir can also be a physical system (for example one consisting of DNA). It is widely assumed (though it is not really clear at present to what extent this is actually the case; see the chapter on reservoir computing in this book) that it is helpful if the reservoir operates close to criticality, such that the dynamics is rich enough to allow for interesting things to be read out from it (as opposed to, say, a system where all trajectories approach a certain stationary state irrespective of the input).
|
DNA neural networks require the input signal to have the form of a DNA strand. In general, this is a disadvantage since converting general input signals to DNA is quite an effort. This aspect can, however, turn into an advantage in contexts where the input signal takes the form of DNA strands (or at least that of biomolecules) anyway. This will primarily be the case in biomedical applications of neural networks. Suppose, for example, that a neural network has been trained to recognize genetic dispositions for a certain disease. If this network is implemented in DNA form, then one could just take a DNA sample from the patient, put it into a test tube, and then see a glowing test tube indicating that the gene one looks for is (or is not) present.
|
D
|
We used time-frequency decompositions (TFDs) as a unified representation for both the input (speech signal) and the output (MEG signal). These were computed using Short-Time Fourier Transform (STFT) applied to 3-second windows defined as described above.
|
The Short-Time Fourier Transform (STFT) was applied to 3-second windows, as previously defined. The STFT parameters, including the number of Fast Fourier Transform points (n-FFT) and the overlap between frames (hop length), were adjusted to ensure temporal alignment between the MEG and speech signals. This setup produced a consistent representation of both signals, with each 3-second window divided into 26 time frames.
|
In contrast, the temporal resolution offered by Magnetoencephalography (MEG), despite other limitations (e.g. lower sensitivity in deep brain structures), could provide a more detailed and dynamic insight into neural mechanisms underlying language comprehension and generation. In this work, we aimed to develop encoding models to advance our understanding of language processing through the lens of MEG data. An encoding model is a computational framework designed to map input stimuli to corresponding (i.e. elicited by the corresponding stimulus) neural activity. Here, we develop audio-to-MEG encoders using two types of representations for audio data, i.e. time-frequency decompositions derived from Short-time Fourier Transform (STFT) (Griffin and Lim, 1984), and latent representations generated by the wav2vec2 library (Baevski et al., 2020). Additionally, we built text-to-MEG encoders that incorporate embeddings from the Contrastive Language-Image Pretraining (CLIP) model (Radford et al., 2021) or GPT-2 (Radford et al., 2019) and compared the encoding performance between all pipelines (Figure 1). This comparison was performed with the goal of gaining insight into the neural processes involved in auditory and linguistic perception and advancing the computational strategies used for interpreting complex neural signals.
|
We used data from the MEG-MASC dataset (Gwilliams et al., 2023), specifically selecting 8 subjects as in the study by Oota et al. (2023). The dataset includes recordings from 208 MEG sensors as the subjects listened to a series of naturalistic spoken stories, selected from the Open American National Corpus, namely “Cable Spool Boy”, “LW1”, “Black willow”, and “Easy money”. For pre-processing the raw MEG data, we employed the MNE-Python library (Appelhoff et al., 2019), which involved a) bandpass filtering (0.5-30.0 Hz) (Marzetti et al., 2013); b) segmentation into windows (length = 3 s) which begin in correspondence with a word (stimulus) onset, and typically encompass approximately 5 words; c) window-wise baseline correction using 200 ms of signal taken immediately before the stimulus to minimize noise from non-task-related variations (the mean signal across the baseline period is subtracted from all time points within the epoch); d) channel-wise clipping of amplitude signals between the fifth and ninety-fifth percentile. Also, the audio and MEG data were originally sampled at 16000 Hz and 1000 Hz, respectively. Given the necessity of temporal alignment between each audio window and the corresponding MEG segment, pre-processing resulted in a collection of 48000 time points for the audio signals (3 s × 16000 Hz) and of 3000 time points (3 s × 1000 Hz, baseline-corrected) for each MEG channel/sensor. From this point on, the term 3-second windows will also include the baseline period in the case of the MEG signal.
|
We used time-frequency decompositions (TFDs) as a unified representation for both the input (speech signal) and the output (MEG signal). These were computed using Short-Time Fourier Transform (STFT) applied to 3-second windows defined as described above.
|
A
|
Motivated by this, in the present paper, we formulate a reinforcement learning (RL) strategy where an agent performs run-and-tumble motion in an environment with an inhomogeneous concentration of attractant. For simplicity, we consider one spatial dimension here. The agent can either persist moving in the same direction, or can reverse its direction. We define a cost matrix which assigns a cost to each of these two actions, depending on the recent history of the agent’s trajectory. With a small probability ϵ the agent ‘explores’ its surroundings by performing a random action (persist or reverse) irrespective of its cost, and with the remaining probability (1 − ϵ), the agent ‘exploits’ its previous learning experience to decide its next action. The agent uses the ‘Q-learning’ method to learn and optimise its action based on its experience. Q-learning employs a non-supervised learning method and is a simple version of RL based on optimizing a value function with respect to a given environment [6].
|
In the case when all the concentration peaks are of the same size, then starting from a uniform initial position, the agent is able to localize most strongly near the peak regions when exploration rate is low and learning rate is high. However, when the agent starts from the vicinity of one particular attractant peak, then it remains trapped there and even after a large time has passed, the agent is not able to learn about its complete environment if the exploration rate is low or learning rate is high. In this case, an optimum range of these two rates works best for the agent.
|
In this work, we have considered an RL agent which is exploring its environment via run-and-tumble motion. We are interested in the question: under what conditions is the RL strategy most efficient? We quantify efficiency by the probability to find the agent in the attractant-rich region in the long time limit, and also by how much the agent has learnt about its environment. A successful RL strategy allows the agent to quickly learn about its attractant environment and localize in the favorable region. We find that, depending on the nature of the attractant concentration profile, different RL strategies work best.
|
To capture the trapping effect, therefore, we define another performance criterion, which measures whether all the favorable regions in the environment have been sampled by the agent. Starting from the vicinity of one peak of [L](x), we measure the probability to find the agent in the vicinity of another peak, as a function of time. More specifically, we choose the initial condition where the agent is equally likely to be located in a region R_0 with initial position lying in the range 0 < x < λ/2. As a function of time we measure the probability to find the agent in the region R which is in the neighborhood of the next peak with λ < x < 3λ/2. We denote this probability as P_{R|R_0}(t), which is expected to grow with time t and then saturate as steady state is reached. However, as seen in our data in Figs. 6(a) and (b), the saturation can be logarithmically slow with time, especially when ϵ (α) is small (large). This is expected, since going from one peak to another, the agent needs to cross a region where [L](x) is quite small. This means the agent has to execute long downhill runs, which get increasingly difficult for small (large) ϵ (α). For a logarithmically slow relaxation, it is often not feasible to evaluate the performance of the agent based on its steady state properties. Rather, the behavior of P_{R|R_0}(t) for large times (but not large enough to reach steady state) can be studied. If it is found that for a certain large value of t = t_1, the probability P_{R|R_0}(t_1) has a low value, then that means the agent has not yet learned about its full environment, although a long time has passed. This indicates inefficient learning. On the other hand, a large value of P_{R|R_0}(t_1) means that the agent managed to learn about its environment and determined its preference for a particular location based on this knowledge. The RL strategy in this case is working well.
|
We are interested in the long time behavior of the agent. In particular, we measure the effectiveness of the RL algorithm by evaluating the performance of the agent at large times. We use different performance criteria like how strongly the agent is able to localize in the high attractant zones, or how quickly it is able to find the region where the attractant concentration is highest, etc. The efficiency of the RL strategy can also be measured from how well the agent has learnt about its environment and whether this learning is used in its long time behavior. If the agent is found to behave based on only partial knowledge of its environment even after a long time has passed, then the RL strategy is deemed rather inefficient.
|
D
|
The receptive field (RF) size corresponds to the size of the result of the affine transformation applied to the original image. Specifically, this transformed region determines the portion of the original image contributing to the neural response at the layer being analyzed. By using the parameters of the best-performing model, we identified the effective RF size in the original image space, ensuring consistency with the spatial transformations and feature extraction applied during analysis.
|
Our analysis shows that the predicted activity from the AFRT model correlates more strongly with the ground-truth signals than the predicted activity from the baseline model Linear-AlexNet (Fig. 2). We plotted the correlation values for all the best performing models from the conv1, conv2 and conv3 layers. Each point represents a signal-wise model, and the color represents the model type (blue is AFRT, red is Linear-AlexNet). Although the baseline model makes use of a significantly higher number of features, our results show that models containing the affine components perform better (with a correlation value of 0.5 or higher). Overall, AFRT encodes MUA activity more accurately than the Linear-AlexNet model and our results also show that AFRT is less prone to overfitting.
|
To evaluate performance, we trained three encoding models for each MUA signal, using features from layers 1, 2, and 5. For each MUA, we selected the best-performing model out of the three trained models based on the Pearson correlation value between the predicted response and the target response. This method not only provides a large space of models to select from but also identifies which layer contains the most informative features for the encoding task.
|
Our model demonstrates substantial enhancements in predicting multi-unit activity (MUA) across the V1, V4, and IT regions of the macaque, outperforming traditional models that lack biologically-inspired constraints. Additionally, AFRT significantly reduces the number of required parameters by transforming feature responses into scalars instead of entire feature maps, as illustrated in Figure 5. This reduction not only simplifies the model complexity but also improves the interpretability and efficiency of response predictions. Furthermore, while our study employs basic assumptions—that each neural signal corresponds to a non-rotating spatial receptive field—the proposed AFRT model is not inherently limited to these constraints. Indeed, spatial transformer networks could extend this model by incorporating additional parameters, allowing for rotation of the spatially transformed image [14].
|
Figure 2: Comparison between the performance values of the AFRT model (blue) and the baseline model (red). Blue dots show correlation values for trained AFRT models and red dots show the values for trained baseline models. The dashed line shows the average across all models. Both models are trained on the training set and values are evaluated using the test set. The top row shows all the models that were trained using three feature layers per electrode (1122 models for V1, 507 for V4, and 372 for IT) whereas the bottom row shows the best selected performance (374 models for V1, 169 for V4, and 124 for IT).
|
A
|
Insights gained from this study can be extended to other diseases whose mechanisms are yet to be clarified.
|
This study was supported by grants from the Ministerio de Ciencia e Innovación (PID2021-126961OB-I00, PLEC2022-009401); Instituto de Salud Carlos III, Ministerio de Ciencia e Innovación and European Regional Development Fund (ERDF A way of making Europe) (Red de Terapias Avanzadas, RD21/0017/0020); European Union NextGeneration EU/PRTR; Generalitat de Catalunya (2021 SGR 01094); “la Caixa” Foundation under the grant agreements LCF/PR/HR21-00622 and LCF/BQ/PI24/12040007; and Red Española de Supercomputación (RES) under project BCV-2024-2-0010.
|
This study is based on striatal snRNA-seq obtained from two post-natal stages, 8 and 12 weeks old, from wild-type (WT) and an HD mouse model [9]. First, we generated the single-cell gene count matrices using CellRanger [10]. Next, we used Seurat [11] to normalize the counts and identify highly variable genes, resulting in a normalized matrix consisting of 42,800 cells and the top 2,500 most variable genes. Then, we clustered the cells, resulting in 17 clusters (Fig. 1). Finally, we used the FindMarkers function in Seurat to generate cluster marker genes that allowed us to assign a cell type to each cluster based on the comparison with known cell-type-specific gene markers described in the literature. This way, among the clusters with the highest cell counts, we identified two clusters corresponding to SPNs: a cluster corresponding to indirect-pathway SPNs (iSPNs), characterized by the expression of Drd2, Adora2a, Penk, and Oprd1; and a cluster corresponding to direct-pathway SPNs (dSPNs), characterized by the expression of Drd1, Sp9, and Sp8. Since SPNs are the most abundant neuronal population in the striatum and specifically affected in HD [12], we chose to focus our analysis on the aforementioned SPN clusters.
|
To conduct the XAI analysis, we used the KernelExplainer from SHAP, which uses a special weighted linear regression to compute the importance of each feature. An explainer was created using the training data set as the background to generate explanations for the HD cells in each cluster from the test set. We used this approach to identify the set of informative genes driving the prediction for this genotype. Furthermore, we took advantage of the single-cell resolution to compute the Pearson correlation coefficient between the gene expression and the individual SHAP values.
|
Figure 3: Barplot displaying the top 20 DEGs from DESeq2 based on absolute LFC for clusters iSPN (left) and dSPN (right). Bars are colour-coded to indicate genes upregulated (blue) and downregulated (red) in HD.
|
A
|
Different input file types are supported with their idiosyncratic options, which all are represented by a uniform data type that we call a (potential) Variant. A Variant describes a single position on a chromosome, here, position 123 on chromosome Chr1, and stores the reference and alternative base for file formats that support them (and otherwise infers them from the two most common bases at the position, or from a provided reference genome file). This is similar to the data of the sync format.
|
For each sample of the input (e. g., read groups in SAM files, columns in mpileup files, or sample frequencies from tabular formats), the nucleotide base counts (ACGT) of the pooled reads are stored, including counts for "any" (N) and "deletion" (D), which are however ignored in most statistics.
|
If a reference genome is provided, it is used to fill in the reference bases when using file formats that do not store these. When multiple input files are provided (even of different formats, and with missing data), they are traversed in parallel, using either the intersection or the union of the genomic positions present in the files, and internally combined as if they were one file with multiple samples. Samples can furthermore be grouped by merging their counts, for instance to combine different sequencing runs into an (artificial) pool.
|
Most commonly, our inputs are sequence reads or read-derived allele counts, as those fully capture the effects of both sources of noise, which can then be corrected for. Our implementation however can also be used with inferred or adjusted allele frequencies as input, for instance using information from the haplotype frequencies of the founder generation in E&R experiments (7, 8). These can elevate the effective coverage, and thus improve the calling of low-frequency alleles, which can otherwise be difficult to distinguish from sequencing errors (2). With these reconstructed allele frequencies, the correction for read depth is less relevant, but the correction for pool size remains important. It is hence convenient to be able to use the same framework for these data, which existing implementations do not offer.
|
In contrast, and in addition to these formats, grenedalf can directly work with other standard file formats such as sam/bam (12), cram (13), vcf (using the "AD" allelic depth field) (14), and a variety of simple table formats, for reading allele counts or allele frequencies from pool sequencing data. All formats can also optionally be gzipped (decompression is done asynchronously for speed), and their idiosyncratic options (such as filtering by read flags or splitting by read groups for sam/bam) are supported. This eliminates the need for intermediate file conversions, reduces overhead for file bookkeeping, disk space, and processing time (see Supplement), and increases user convenience.
|
A
|
Minimum RMSD: This metric provides insight into the average best-case alignment between the generated conformations and the reference set, indicating the overall accuracy of the model.
|
Maximum RMSD: The maximum RMSD highlights the worst outliers among the generated conformations, revealing cases where the model may struggle to produce accurate structures.
|
Minimum RMSD: This metric provides insight into the average best-case alignment between the generated conformations and the reference set, indicating the overall accuracy of the model.
|
MAT-P (Mean RMSD-Precision): MAT-P scores reflect the mean RMSD between each generated conformation and its nearest reference counterpart. It calculates the average structural deviation between the generated and reference conformations. Low MAT-P scores indicate that the generated conformations closely resemble the reference structures in terms of structural similarity.
|
Evaluating the performance of conformation generation models involves assessing their ability to produce a diverse set of accurate molecular structures. The root-mean-square deviation (RMSD) metric is a crucial measure in this context, which quantifies the structural differences between the generated conformations and a reference set after aligning them using algorithms like Kabsch.
|
A
|
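Below is a minimal usage sketch showing one way rows of this dataset could be loaded and formatted as multiple-choice prompts with the Hugging Face datasets library. The repository id is a placeholder, and the column names (context, A, B, C, D, label) are assumed from the table layout above rather than taken from a dataset card.

```python
# Minimal sketch: load the dataset and format one row as a multiple-choice prompt.
# "your-namespace/your-dataset" is a hypothetical repository id; the column names
# are assumptions based on the table layout shown above.
from datasets import load_dataset

REPO_ID = "your-namespace/your-dataset"  # placeholder; replace with the real repo id
OPTION_KEYS = ("A", "B", "C", "D")

def row_to_prompt(row: dict) -> str:
    """Turn one row into a prompt string, with the gold label appended."""
    options = "\n".join(f"{key}. {row[key]}" for key in OPTION_KEYS)
    return f"{row['context']}\n\n{options}\n\nAnswer: {row['label']}"

if __name__ == "__main__":
    dataset = load_dataset(REPO_ID, split="train")
    print(row_to_prompt(dataset[0]))
```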