CBRAIN offers a multitude of computational pipelines for performing complex "big data" analytics. The pipelines range from simple file-conversion operations to neuroimaging, genomics, and supply-chain simulation modeling.
Users can decide what data to use and what options are needed, then launch pipelines on large-scale computing resources for rapid processing.
CBRAIN works with an ecosystem of software that makes it easy for users to define, test, and import their own pipelines into the system. Execution pipelines are described with the easy-to-use Boutiques JSON standard, which allows them to be imported into CBRAIN automatically.
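As a sketch of what such a definition looks like, here is a minimal Boutiques descriptor. The tool name, command line, and parameters are purely illustrative; only the overall structure (command-line template, inputs with value keys, and output files) follows the Boutiques schema.

```json
{
  "name": "hypothetical_smoother",
  "description": "Illustrative pipeline that smooths an input image.",
  "tool-version": "1.0.0",
  "schema-version": "0.5",
  "command-line": "smooth [INPUT] [FWHM]",
  "inputs": [
    {
      "id": "input_file",
      "name": "Input image",
      "type": "File",
      "value-key": "[INPUT]"
    },
    {
      "id": "fwhm",
      "name": "Smoothing kernel FWHM (mm)",
      "type": "Number",
      "optional": true,
      "value-key": "[FWHM]"
    }
  ],
  "output-files": [
    {
      "id": "smoothed",
      "name": "Smoothed image",
      "path-template": "smoothed_[INPUT]"
    }
  ]
}
```

The value keys in the `command-line` template are substituted with user-supplied values at launch time, so the same descriptor drives both the web form and the generated command.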
Check out our documentation and begin to import your own pipelines.
CBRAIN offers a number of Data Providers for storing, organizing, and moving your data seamlessly throughout the CBRAIN ecosystem, ensuring that large datasets are transferred efficiently and securely for large-scale computations.
Whether your data starts on your desktop, resides at a computing center or lives out in the cloud, it can easily become part of the CBRAIN data ecosystem.
CBRAIN provides a place where users can connect computing and cloud resources into a vast pool of computation to achieve their science. CBRAIN partners with Compute Canada to provide millions of computing hours to the research community through our Portal.