I do research at large user facilities and manage a lab in one of them, helping scientists from all fields.
Machine learning is becoming more and more widespread in all kinds of scientific fields...
In the past two decades, many tech companies (Google, Facebook, etc.) made it onto the Fortune 500 list by analyzing large amounts of data with newly developed computer algorithms. This field is called machine learning, and it is now making a big impact on modern physics, chemistry and biology.
A group of Australian physicists and computer scientists built a proof-of-principle experiment consisting of a very complicated laser system that produces ultra-cold atoms in Bose-Einstein condensates (Nobel Prize in Physics, 2001). The original experiments took quite a long time; the machine learning algorithm, however, mastered the optimization task in one hour!
There are many answers to this question, and today a big part of them is still speculation. Some are afraid of the future capabilities of AI, others promote open and open-source research in the field, but one thing is certain: computer algorithms can do more and more each day.
Image recognition is one of machine learning's big successes. For some diagnostic tasks, the tools currently available are more accurate than trained doctors, thanks to the large databases the algorithms can be trained on.
Big data, such as gene-sequencing output, can also be examined with computer codes to uncover hidden links between parts of the genome.
Computers can now predict the properties of new materials based on calculations alone! This saves unnecessary synthesis effort, which also has a big, positive environmental impact.
...are endless. Since machine learning in the natural sciences is a relatively new field, there is plenty of low-hanging fruit, but the field is saturating quickly.
Abstract: Identification of the underlying mechanisms behind drug side effects is of extreme interest and importance in drug discovery today. Therefore machine learning methodologies, linking such different multi-feature aspects and able to make predictions, are crucial for understanding side effects. In this paper we present DrugClust, a machine learning algorithm for drug side-effect prediction. The DrugClust pipeline works as follows: first, drugs are clustered with respect to their features; then side-effect predictions are made according to Bayesian scores. Biological validation of the resulting clusters can be done via enrichment analysis, another functionality implemented in the methodology. This last tool is of extreme interest for drug discovery, given that it can be used to validate the clusters obtained, as well as to study possible new interactions between certain side effects and non-targeted pathways. Results were evaluated with a 5-fold cross-validation procedure, and extensive comparisons were made with available datasets in the field: Zhang et al. (2015), Liu et al. (2012) and Mizutani et al. (2012). Results are promising and show better performance than the available literature in most cases. DrugClust is an R package freely available at: https://cran.r-project.org/web/packages/DrugClust/index.html.
Pub.: 10 Apr '17, Pinned: 12 Apr '17
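The cluster-then-score pipeline described in the abstract can be sketched in a few lines. This is a hypothetical Python miniature of the idea only (the actual DrugClust is an R package with its own clustering and scoring); the drug names, features and side effects below are made up.

```python
# Hypothetical miniature of the DrugClust idea: cluster drugs by
# feature similarity, then score each side effect within a cluster
# with a smoothed, Bayesian-style frequency estimate.

def jaccard(a, b):
    """Similarity between two binary feature sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def cluster_drugs(features, threshold=0.5):
    """Greedy clustering: join a drug to the first cluster whose
    representative (its first member) it resembles above threshold."""
    clusters = []
    for drug, feats in features.items():
        for cluster in clusters:
            if jaccard(feats, features[cluster[0]]) >= threshold:
                cluster.append(drug)
                break
        else:
            clusters.append([drug])
    return clusters

def side_effect_score(cluster, known_effects, effect, alpha=1.0):
    """Laplace-smoothed probability of `effect` within the cluster."""
    hits = sum(effect in known_effects.get(d, set()) for d in cluster)
    return (hits + alpha) / (len(cluster) + 2 * alpha)

# Made-up example data.
features = {
    "drugA": {"kinase_inhibitor", "oral"},
    "drugB": {"kinase_inhibitor", "oral", "lipophilic"},
    "drugC": {"antibody", "iv"},
}
known_effects = {"drugA": {"nausea"}, "drugB": {"nausea"}}

clusters = cluster_drugs(features)
print(clusters)  # drugA and drugB share features, so they cluster together
```

A new drug would then be assigned to its nearest cluster and inherit that cluster's side-effect scores.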
Abstract: Plasmonic sensors have been used for a wide-range of biological and chemical sensing applications. Emerging nano-fabrication techniques have enabled these sensors to be cost-effectively mass-manufactured onto various types of substrates. To accompany these advances, major improvements in sensor read-out devices must also be achieved to fully realize the broad impact of plasmonic nano-sensors. Here, we propose a machine learning framework which can be used to design low-cost and mobile multi-spectral plasmonic readers that do not use traditionally employed bulky and expensive stabilized light-sources or high-resolution spectrometers. By training a feature selection model over a large set of fabricated plasmonic nano-sensors, we select the optimal set of illumination light-emitting-diodes needed to create a minimum-error refractive index prediction model, which statistically takes into account the varied spectral responses and fabrication-induced variability of a given sensor design. This computational sensing approach was experimentally validated using a modular mobile plasmonic reader. We tested different plasmonic sensors with hexagonal and square periodicity nano-hole arrays, and revealed that the optimal illumination bands differ from those that are 'intuitively' selected based on the spectral features of the sensor, e.g., transmission peaks or valleys. This framework provides a universal tool for the plasmonics community to design low-cost and mobile multi-spectral readers, helping the translation of nano-sensing technologies to various emerging applications such as wearable sensing, personalized medicine, and point-of-care diagnostics. Beyond plasmonics, other types of sensors that operate based on spectral changes can broadly benefit from this approach, including e.g., aptamer-enabled nanoparticle assays and graphene-based sensors, among others.
Pub.: 28 Jan '17, Pinned: 12 Apr '17
Abstract: The chemical composition of core-shell nanoparticle clusters have been determined through principal component analysis (PCA) and independent component analysis (ICA) of an energy-dispersive X-ray (EDX) spectrum image (SI) acquired in a scanning transmission electron microscope (STEM). The method blindly decomposes the SI into three components, which are found to accurately represent the isolated and unmixed X-ray signals originating from the supporting carbon film, the shell, and the bimetallic core. The composition of the latter is verified by and is in excellent agreement with the separate quantification of bare bimetallic seed nanoparticles.
Pub.: 12 Mar '15, Pinned: 12 Apr '17
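The blind decomposition above rests on a simple linear-algebra idea: each pixel's spectrum is a mixture of a few pure component spectra, and PCA recovers how many components carry the variance. Below is an illustrative NumPy sketch on synthetic two-component "spectra" (not real EDX data; dedicated tools such as HyperSpy are normally used for spectrum images).

```python
# Minimal PCA sketch in the spirit of blind spectral decomposition:
# each row is one pixel's spectrum; PCA reveals that only a couple of
# component spectra explain almost all of the variance.
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "pure" component spectra over 6 energy channels.
core = np.array([5.0, 1.0, 0.0, 4.0, 0.0, 0.0])
shell = np.array([0.0, 0.0, 3.0, 0.0, 2.0, 1.0])

# Each pixel mixes the components in different proportions, plus noise.
weights = rng.random((200, 2))
spectra = weights @ np.vstack([core, shell]) \
    + 0.01 * rng.standard_normal((200, 6))

# PCA: centre the data, eigendecompose the covariance matrix.
centred = spectra - spectra.mean(axis=0)
cov = centred.T @ centred / (len(spectra) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
explained = eigvals[::-1] / eigvals.sum()    # variance ratios, descending

# With two underlying components, two PCs capture almost all variance.
print(explained[:3].round(3))
```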
Abstract: We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a nonlinear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross validation over more than seven thousand organic molecules yields a mean absolute error of ∼10 kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves.
Pub.: 10 Mar '12, Pinned: 12 Apr '17
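The statistical regression at the heart of this approach is kernel ridge regression. The sketch below shows the method itself on hypothetical one-dimensional toy data, which merely stands in for the paper's Coulomb-matrix descriptors and DFT atomization energies.

```python
# Kernel ridge regression (KRR) sketch with a Gaussian kernel.
# The toy descriptor/energy data is hypothetical.
import numpy as np

def krr_fit_predict(X_train, y_train, X_test, sigma=1.0, lam=1e-6):
    """Fit KRR on the training pairs and predict at the test points."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    K = kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    return kernel(X_test, X_train) @ alpha

X = np.linspace(0.0, 3.0, 40).reshape(-1, 1)   # toy descriptors
y = np.sin(X).ravel()                          # toy "energies"
max_err = float(np.abs(krr_fit_predict(X, y, X) - y).max())
print(max_err)  # tiny training error: the kernel interpolates the data
```

In the paper the same machinery is trained on thousands of molecules, with the kernel acting on sorted Coulomb-matrix eigenspectra instead of a scalar.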
Abstract: Machine learning is used to approximate density functionals. For the model problem of the kinetic energy of noninteracting fermions in 1D, mean absolute errors below 1 kcal/mol on test densities similar to the training set are reached with fewer than 100 training densities. A predictor identifies if a test density is within the interpolation region. Via principal component analysis, a projected functional derivative finds highly accurate self-consistent densities. The challenges for application of our method to real electronic structure problems are discussed.
Pub.: 26 Sep '12, Pinned: 12 Apr '17
Abstract: We present a molecular dynamics scheme which combines first-principles and machine-learning (ML) techniques in a single information-efficient approach. Forces on atoms are either predicted by Bayesian inference or, if necessary, computed by on-the-fly quantum-mechanical (QM) calculations and added to a growing ML database, whose completeness is, thus, never required. As a result, the scheme is accurate and general, while progressively fewer QM calls are needed when a new chemical process is encountered for the second and subsequent times, as demonstrated by tests on crystalline and molten silicon.
Pub.: 21 Mar '15, Pinned: 12 Apr '17
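The control flow of such a scheme is easy to illustrate: predict cheaply when the query resembles the database, otherwise do the expensive call and grow the database. The real method uses Bayesian inference over atomic environments; in this hypothetical sketch a nearest-neighbour lookup with a trust radius stands in, and the "QM call" is a toy function.

```python
# Hypothetical miniature of the "learn on the fly" idea.
import math

def expensive_qm(x):
    """Stand-in for an on-the-fly quantum-mechanical force call."""
    return math.sin(x)

database = []   # growing (configuration, force) pairs
qm_calls = 0

def predict_force(x, trust_radius=2):
    """Reuse the database when the query is close to known data;
    otherwise fall back to QM and record the result."""
    global qm_calls
    if database:
        nearest = min(database, key=lambda pair: abs(pair[0] - x))
        if abs(nearest[0] - x) < trust_radius:
            return nearest[1]          # close to known data: predict
    qm_calls += 1                      # too far: fall back to QM
    force = expensive_qm(x)
    database.append((x, force))
    return force

# Visit ten toy, integer-labelled configurations ten times each.
for step in range(100):
    predict_force(step % 10)
print(qm_calls)  # QM was only needed on the first pass
```

The count of expensive calls stops growing once the database covers the configurations encountered, which is exactly the "progressively fewer QM calls" behaviour the abstract describes.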
Abstract: We use machine-learning methods on local structure to identify flow defects (or particles susceptible to rearrangement) in jammed and glassy systems. We apply this method successfully to two very different systems: a two-dimensional experimental realization of a granular pillar under compression and a Lennard-Jones glass in both two and three dimensions above and below its glass transition temperature. We also identify characteristics of flow defects that differentiate them from the rest of the sample. Our results show it is possible to discern subtle structural features responsible for heterogeneous dynamics observed across a broad range of disordered materials.
Pub.: 31 Mar '15, Pinned: 12 Apr '17
Abstract: Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that nonlinear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity possible under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with T1 processes and show these are the main source of discrepancy between our experimental and ideal fidelities. These error diagnosis techniques help provide a path forward to improve qubit measurements.
Pub.: 06 Jun '15, Pinned: 12 Apr '17
Abstract: Author(s): Vedran Dunjko, Jacob M. Taylor, and Hans J. Briegel. The emerging field of quantum machine learning has the potential to substantially aid in the problems and scope of artificial intelligence. This is only enhanced by recent successes in the field of classical machine learning. In this work we propose an approach for the systematic treatment of machin… [Phys. Rev. Lett. 117, 130501] Published Tue Sep 20, 2016
Pub.: 20 Sep '16, Pinned: 12 Apr '17
Abstract: Recent advances in machine learning methods, along with successful applications across a wide variety of fields such as planetary science and bioinformatics, promise powerful new tools for practicing scientists. This viewpoint highlights some useful characteristics of modern machine learning methods and their relevance to scientific applications. We conclude with some speculations on near-term progress and promising directions.
Pub.: 15 Sep '01, Pinned: 12 Apr '17
Abstract: High Resolution Melt (HRM) is a versatile and rapid post-PCR DNA analysis technique primarily used to differentiate sequence variants among only a few short amplicons. We recently developed a one-vs-one support vector machine algorithm (OVO SVM) that enables the use of HRM for identifying numerous short amplicon sequences automatically and reliably. Herein, we set out to maximize the discriminating power of HRM + SVM for a single genetic locus by testing longer amplicons harboring significantly more sequence information. Using universal primers that amplify the hypervariable bacterial 16S rRNA gene as a model system, we found that long amplicons yield more complex HRM curve shapes. We developed a novel nested OVO SVM approach to take advantage of this feature and achieved 100% accuracy in the identification of 37 clinically relevant bacteria in leave-one-out cross-validation. A subset of organisms were independently tested. Those from pure culture were identified with high accuracy, while those tested directly from clinical blood bottles displayed more technical variability and reduced accuracy. Our findings demonstrate that long sequences can be accurately and automatically profiled by HRM with a novel nested SVM approach and suggest that clinical sample testing is feasible with further optimization.
Pub.: 19 Jan '16, Pinned: 12 Apr '17
Abstract: This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, the multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods ineffective in terms of both search efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods, with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
Pub.: 24 Jun '15, Pinned: 12 Apr '17
Abstract: Geochemical discrimination has recently been recognised as a potentially useful proxy for identifying tsunami deposits in addition to classical proxies such as sedimentological and micropalaeontological evidence. However, difficulties remain because it is unclear which elements best discriminate between tsunami and non-tsunami deposits. Herein, we propose a mathematical methodology for the geochemical discrimination of tsunami deposits using machine-learning techniques. The proposed method can determine the appropriate combinations of elements and the precise discrimination plane that best discerns tsunami deposits from non-tsunami deposits in high-dimensional compositional space through the use of data sets of bulk composition that have been categorised as tsunami or non-tsunami sediments. We applied this method to the 2011 Tohoku tsunami and to background marine sedimentary rocks. After an exhaustive search of all 262,144 (= 2^18) combinations of the 18 analysed elements, we observed several tens of combinations with discrimination rates higher than 99.0%. The analytical results show that elements such as Ca and several heavy-metal elements are important for discriminating tsunami deposits from marine sedimentary rocks. These elements are considered to reflect the formation mechanism and origin of the tsunami deposits. The proposed methodology has the potential to aid in the identification of past tsunamis by using other tsunami proxies.
Pub.: 18 Nov '14, Pinned: 12 Apr '17
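The exhaustive search over element combinations maps naturally onto bitmask enumeration of subsets. The sketch below shows the enumeration pattern on a hypothetical four-element toy (rather than the paper's 18), with a made-up scoring function standing in for fitting a discrimination plane.

```python
# Sketch of the exhaustive combination search: enumerate every subset
# of elements as a bitmask and keep the ones whose (toy) score clears
# a threshold. Elements and the score function are hypothetical.
from itertools import compress

elements = ["Ca", "Fe", "Mn", "Zn"]   # toy stand-in for 18 elements

def discrimination_rate(subset):
    """Hypothetical score; the paper fits a discriminant plane here."""
    return 0.5 + 0.2 * ("Ca" in subset) + 0.3 * ("Zn" in subset)

best = []
for mask in range(1, 2 ** len(elements)):   # all non-empty subsets
    picks = (mask >> i & 1 for i in range(len(elements)))
    subset = list(compress(elements, picks))
    if discrimination_rate(subset) >= 0.99:
        best.append(subset)

print(len(best))  # only subsets containing both Ca and Zn survive
```

With 18 elements the same loop runs 2^18 = 262,144 times, which is still entirely tractable.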
Abstract: Advanced materials characterization techniques with ever-growing data acquisition speed and storage capabilities represent a challenge in modern materials science, and new procedures to quickly assess and analyze the data are needed. Machine learning approaches are effective in reducing the complexity of data and rapidly homing in on the underlying trend in multi-dimensional data. Here, we show that by employing an algorithm called the mean shift theory to a large amount of diffraction data in high-throughput experimentation, one can streamline the process of delineating the structural evolution across compositional variations mapped on combinatorial libraries with minimal computational cost. Data collected at a synchrotron beamline are analyzed on the fly, and by integrating experimental data with the inorganic crystal structure database (ICSD), we can substantially enhance the accuracy in classifying the structural phases across ternary phase spaces. We have used this approach to identify a novel magnetic phase with enhanced magnetic anisotropy which is a candidate for rare-earth free permanent magnet.
Pub.: 16 Sep '14, Pinned: 12 Apr '17
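Mean shift, the algorithm named in the abstract, iteratively moves each point to the average of its neighbours until it settles on a local density maximum; points that settle on the same maximum form a cluster. Here is a minimal one-dimensional sketch on made-up data (real diffraction patterns are high-dimensional, but the mechanics are the same).

```python
# Minimal 1-D mean-shift sketch: shift each point to the mean of the
# points within `bandwidth` of it until it converges on a mode.
def mean_shift(points, bandwidth=1.0, iters=50):
    modes = []
    for x in points:
        for _ in range(iters):
            window = [p for p in points if abs(p - x) <= bandwidth]
            x = sum(window) / len(window)
        modes.append(round(x, 1))
    return sorted(set(modes))

# Toy data: two well-separated groups of measurements.
data = [-0.2, -0.1, 0.1, 0.2, 4.8, 4.9, 5.1, 5.2]
found = mean_shift(data)
print(found)  # two modes, one per group
```

Because the number of clusters emerges from the data rather than being fixed in advance, the method suits on-the-fly classification of streaming diffraction data.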
Abstract: The materials discovery process can be significantly expedited and simplified if we can learn effectively from available knowledge and data. In the present contribution, we show that efficient and accurate prediction of a diverse set of properties of material systems is possible by employing machine (or statistical) learning methods trained on quantum mechanical computations in combination with the notions of chemical similarity. Using a family of one-dimensional chain systems, we present a general formalism that allows us to discover decision rules that establish a mapping between easily accessible attributes of a system and its properties. It is shown that fingerprints based on either chemo-structural (compositional and configurational information) or the electronic charge density distribution can be used to make ultra-fast, yet accurate, property predictions. Harnessing such learning paradigms extends recent efforts to systematically explore and mine vast chemical spaces, and can significantly accelerate the discovery of new application-specific materials.
Pub.: 01 Oct '13, Pinned: 12 Apr '17
Abstract: The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance.
Pub.: 20 Jan '16, Pinned: 12 Apr '17
Abstract: Inorganic–organic hybrid materials [1, 2, 3] such as organically templated metal oxides [1], metal–organic frameworks (MOFs) [2] and organohalide perovskites [4] have been studied for decades, and hydrothermal and (non-aqueous) solvothermal syntheses have produced thousands of new materials that collectively contain nearly all the metals in the periodic table [5, 6, 7, 8, 9]. Nevertheless, the formation of these compounds is not fully understood, and development of new compounds relies primarily on exploratory syntheses. Simulation- and data-driven approaches (promoted by efforts such as the Materials Genome Initiative [10]) provide an alternative to experimental trial-and-error. Three major strategies are: simulation-based predictions of physical properties (for example, charge mobility [11], photovoltaic properties [12], gas adsorption capacity [13] or lithium-ion intercalation [14]) to identify promising target candidates for synthetic efforts [11, 15]; determination of the structure–property relationship from large bodies of experimental data [16, 17], enabled by integration with high-throughput synthesis and measurement tools [18]; and clustering on the basis of similar crystallographic structure (for example, zeolite structure classification [19, 20] or gas adsorption properties [21]). Here we demonstrate an alternative approach that uses machine-learning algorithms trained on reaction data to predict reaction outcomes for the crystallization of templated vanadium selenites. We used information on ‘dark’ reactions (failed or unsuccessful hydrothermal syntheses) collected from archived laboratory notebooks from our laboratory, and added physicochemical property descriptions to the raw notebook information using cheminformatics techniques. We used the resulting data to train a machine-learning model to predict reaction success.
When carrying out hydrothermal synthesis experiments using previously untested, commercially available organic building blocks, our machine-learning model outperformed traditional human strategies, and successfully predicted conditions for new organically templated inorganic product formation with a success rate of 89 per cent. Inverting the machine-learning model reveals new hypotheses regarding the conditions for successful product formation.
Pub.: 04 May '16, Pinned: 12 Apr '17
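The core move here, training a classifier on archived outcomes (including failures) and using it to predict whether new conditions will work, can be shown in miniature. The sketch below uses a depth-1 decision stump and invented notebook data; the paper's actual model is a full machine-learning classifier over cheminformatics descriptors.

```python
# Hypothetical miniature: learn a success/failure rule from archived
# reaction outcomes, then predict new reaction conditions.
reactions = [
    # (temperature_C, pH, crystallized?) - invented "notebook" data
    (100, 4.0, 0), (110, 6.0, 0), (115, 5.0, 0), (118, 7.0, 0),
    (125, 4.5, 1), (130, 6.5, 1), (140, 5.5, 1), (150, 7.5, 1),
]

def train_stump(data):
    """Pick the (feature, threshold) split with the fewest errors."""
    best = None
    for f in (0, 1):
        for t in sorted({r[f] for r in data}):
            errors = sum((r[f] > t) != bool(r[2]) for r in data)
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best[1], best[2]

feature, threshold = train_stump(reactions)
predict = lambda conditions: conditions[feature] > threshold

print(feature, threshold)  # the stump discovers the temperature rule
```

Inverting such a model, as the authors do, means reading off which feature/threshold combinations drive the predicted success.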
Abstract: The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.
Pub.: 29 Nov '16, Pinned: 12 Apr '17
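The two-step structure of COMBI, screen first, then test only the survivors, is worth seeing in code. This is a hypothetical toy: a plain correlation score stands in for COMBI's SVM screen, and the synthetic genotypes/phenotype are invented.

```python
# Toy screen-then-test sketch in the spirit of COMBI.
import math
import random

random.seed(1)
n, n_snps = 200, 50
# SNP 7 is truly associated with the phenotype; the rest are noise.
snps = [[random.randint(0, 2) for _ in range(n)] for _ in range(n_snps)]
phenotype = [g + random.gauss(0, 0.5) for g in snps[7]]

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Step 1: screen - keep the k most strongly scored SNPs.
k = 5
scores = [(abs(corr(g, phenotype)), i) for i, g in enumerate(snps)]
candidates = [i for _, i in sorted(scores, reverse=True)[:k]]

# Step 2: hypothesis-test only the k candidates, so the
# multiple-testing correction divides by k instead of by n_snps.
print(candidates)
```

Restricting the formal tests to the screened candidates is what buys the power gain: the significance threshold is corrected for k tests rather than for hundreds of thousands.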
Abstract: Research on resistance genes (R-genes) plays a vital role in bioinformatics, as these genes enable organisms to cope with adverse changes in the external environment by forming the corresponding resistance proteins through transcription and translation. Identifying and predicting the R-genes of Larimichthys crocea (L. crocea) is therefore meaningful, and beneficial for breeding and the marine environment as well. Many of L. crocea's immune mechanisms have been explored by biological methods, but much about them is still unclear. To go beyond this limited understanding of L. crocea's immune mechanisms and to detect new R-genes and R-gene-like genes, this paper proposes a combined prediction method that extracts and classifies features of the available genomic data by machine learning. The effectiveness of the feature extraction and classification methods in identifying potential novel R-genes was evaluated, and different statistical analyses were used to explore the reliability of the prediction method, which can help us further understand the immune mechanisms of L. crocea against pathogens. A webserver called LCRG-Pred is available at http://server.malab.cn/rg_lc/.
Pub.: 07 Dec '16, Pinned: 12 Apr '17
Abstract: The Dishevelled/EGL-10/Pleckstrin (DEP) domain-containing (DEPDC) protein family has seven members. However, whether this superfamily can be distinguished from other proteins based only on amino acid sequences remains unknown. Here, we describe a computational method to segregate DEPDCs and non-DEPDCs. First, we examined the Pfam numbers of the known DEPDCs and used the longest sequences for each Pfam to construct a phylogenetic tree. Subsequently, we extracted 188-dimensional (188D) and 20D features of DEPDCs and non-DEPDCs and classified them with a random forest classifier. We also mined the motifs of human DEPDCs to find the related domains. Finally, we designed experimental methods to verify human DEPDC expression at the mRNA level in hepatocellular carcinoma (HCC) and adjacent normal tissues. The phylogenetic analysis showed that the DEPDC superfamily can be divided into three clusters. Moreover, the 188D and 20D features can both be used to effectively distinguish the two protein types. Motif analysis revealed that the DEP and RhoGAP domains were common in human DEPDCs, and both human HCC and the adjacent tissues widely expressed DEPDCs, although their regulation was not identical. In conclusion, we successfully constructed a binary classifier for DEPDCs and experimentally verified their expression in human HCC tissues.
Pub.: 22 Dec '16, Pinned: 12 Apr '17
Abstract: Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the "derivation cohort" to develop dose-prediction algorithm, while the remaining 20% constituted the "validation cohort" to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
Pub.: 09 Feb '17, Pinned: 12 Apr '17
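The study design above, fit competing models on a derivation split and compare them on a held-out validation split, is the standard pattern for this kind of benchmark. The toy sketch below compares just two models (a mean-dose baseline and a least-squares line standing in for the better ML methods) on invented one-dimensional data.

```python
# Minimal derivation/validation comparison of two dose models
# (toy 1-D data; the study compares MLR against eight ML methods).
import random

random.seed(3)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1))
        for x in [random.random() for _ in range(100)]]
derivation, validation = data[:80], data[80:]   # 80/20 split

def fit_mean(train):
    """Baseline: predict the mean dose regardless of covariates."""
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_linear(train):
    """Least-squares line (stand-in for the stronger ML models)."""
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    slope = sum((x - mx) * (y - my) for x, y in train) / sum(
        (x - mx) ** 2 for x, _ in train)
    return lambda x: my + slope * (x - mx)

def mse(model, split):
    return sum((model(x) - y) ** 2 for x, y in split) / len(split)

linear_err = mse(fit_linear(derivation), validation)
mean_err = mse(fit_mean(derivation), validation)
print(linear_err < mean_err)  # the covariate-aware model wins
```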
Abstract: Traditional evaluation of crop biotic and abiotic stresses are time-consuming and labor-intensive limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for the genetic studies of abiotic stress iron deficiency chlorosis (IDC) of soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions was evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain expert informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for the genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of ML-enabled image-phenotyping pipeline by identifying previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems.
Pub.: 09 Mar '17, Pinned: 12 Apr '17
Abstract: One of the most challenging tasks for scientists involved in the study of ageing is the development of a measure to quantify health status across populations and over time. In the present study, a Bayesian multilevel Item Response Theory approach is used to create a health score that can be compared across different waves in a longitudinal study, using anchor items and items that vary across waves. The same approach can be applied to compare health scores across different longitudinal studies, using items that vary across studies. Data from the English Longitudinal Study of Ageing (ELSA) are employed. Mixed-effects multilevel regression and machine learning methods were used to identify relationships between socio-demographics and the health score created. The metric of health was created for 17,886 subjects (54.6% of them women) participating in at least one of the first six ELSA waves, and it correlated well with conditions already known to affect health. Future efforts will implement this approach in a harmonised data set comprising several longitudinal studies of ageing. This will enable valid comparisons between clinical and community-dwelling populations and help to generate norms that could be useful in day-to-day clinical practice.
Pub.: 11 Mar '17, Pinned: 12 Apr '17
Abstract: Currently, oxygen uptake (VO2) is the most precise means of investigating aerobic fitness and level of physical activity; however, VO2 can only be directly measured in supervised conditions. With the advancement of new wearable sensor technologies and data processing approaches, it is possible to accurately infer work rate and predict VO2 during activities of daily living (ADL). The main objective of this study was to develop and verify the methods required to predict VO2 and investigate its dynamics during ADL. The variables derived from the wearable sensors were used to create a VO2 predictor based on a random forest method. The temporal dynamics were assessed by the mean normalized gain amplitude (MNG) obtained from frequency domain analysis. The MNG provides a means to assess aerobic fitness. The predicted VO2 during ADL was strongly correlated (r = 0.87, P < 0.001) with the measured VO2, and the prediction bias was 0.2 ml·min(-1)·kg(-1). The MNG calculated based on predicted VO2 was strongly correlated (r = 0.71, P < 0.001) with the MNG calculated based on measured data. This new technology provides an important advance in the ambulatory and continuous assessment of aerobic fitness, with potential future applications such as the early detection of deterioration of physical health.
Pub.: 06 Apr '17, Pinned: 12 Apr '17
Abstract: Condensed-matter physics is the study of the collective behaviour of infinitely complex assemblies of electrons, nuclei, magnetic moments, atoms or qubits [1]. This complexity is reflected in the size of the state space, which grows exponentially with the number of particles, reminiscent of the ‘curse of dimensionality’ commonly encountered in machine learning [2]. Despite this curse, the machine learning community has developed techniques with remarkable abilities to recognize, classify, and characterize complex sets of data. Here, we show that modern machine learning architectures, such as fully connected and convolutional neural networks [3], can identify phases and phase transitions in a variety of condensed-matter Hamiltonians. Readily programmable through modern software libraries [4, 5], neural networks can be trained to detect multiple types of order parameter, as well as highly non-trivial states with no conventional order, directly from raw state configurations sampled with Monte Carlo [6, 7].
Pub.: 13 Feb '17, Pinned: 12 Apr '17
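The underlying idea, that a classifier can learn an order parameter directly from raw configurations, works in miniature even with a linear model. The sketch below trains a plain perceptron on invented spin configurations; the paper uses fully connected and convolutional networks on Monte Carlo samples of real Hamiltonians.

```python
# Toy sketch: a perceptron learns to tell "ordered" from "disordered"
# spin configurations from raw samples (the magnetization makes the
# two toy phases nearly linearly separable).
import random

random.seed(2)
N = 20  # spins per configuration

def sample(ordered):
    """Generate a toy spin configuration of +1/-1 sites."""
    if ordered:
        return [1 if random.random() < 0.95 else -1 for _ in range(N)]
    return [random.choice([-1, 1]) for _ in range(N)]

data = ([(sample(True), 1) for _ in range(100)]
        + [(sample(False), 0) for _ in range(100)])
random.shuffle(data)

# Train a perceptron on the raw configurations.
w, b = [0.0] * N, 0.0

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for epoch in range(20):
    for x, y in data:
        err = y - predict(x)
        if err:
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(accuracy)
```

The trained weights end up roughly uniform, i.e. the perceptron rediscovers the magnetization as the discriminating feature; the paper's point is that networks do the analogous thing for far less obvious order parameters.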
Abstract: We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
Pub.: 18 May '16, Pinned: 12 Apr '17
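The closed loop the abstract describes (run the experiment, update a model, propose the next setting) can be shown with a much cruder learner. In this hypothetical sketch a sample-near-the-current-best rule stands in for the paper's Gaussian process surrogate, and a simple quadratic stands in for measured BEC quality.

```python
# Crude online-optimization sketch: experiment -> model -> proposal.
import random

random.seed(0)

def run_experiment(ramp):
    """Stand-in for measured BEC quality as a function of one ramp
    parameter (hypothetical: quality peaks at ramp = 0.7)."""
    return -(ramp - 0.7) ** 2

observations = []
guess = 0.5
for iteration in range(30):
    quality = run_experiment(guess)         # run the experiment
    observations.append((quality, guess))   # update the learner's data
    best = max(observations)[1]             # current best setting
    guess = best + random.gauss(0, 0.1)     # propose the next setting

best_ramp = max(observations)[1]
print(best_ramp)  # converges towards the toy optimum
```

A Gaussian process learner improves on this in exactly the ways the paper highlights: it interpolates between observations, balances exploration against exploitation, and its fitted length scales reveal which parameters matter.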