Google has announced the launch of Vertex AI, a managed machine learning (ML) platform that helps companies accelerate the deployment and maintenance of artificial intelligence (AI) models. According to Google, Vertex AI requires nearly 80 per cent fewer lines of code to train a model than competing platforms, enabling data scientists and ML engineers at every level of expertise to implement Machine Learning Operations (MLOps) and efficiently build and manage ML projects throughout the development lifecycle.
Data scientists today grapple with the challenge of manually piecing together ML point solutions, which slows model development and experimentation and results in very few models making it into production. To tackle these challenges, Vertex AI brings together Google Cloud's services for building ML under one unified UI and API, simplifying the process of building, training, and deploying machine learning models at scale. In this single environment, customers can move models from experimentation to production faster, discover patterns and anomalies more efficiently, make better predictions and decisions, and stay agile in the face of shifting market dynamics.
Through decades of innovation and strategic investment in AI, Google has learned important lessons about how to build, deploy, and maintain ML models in production. Those insights and that engineering have been baked into the foundation and design of Vertex AI, and will be continuously enriched by new innovation coming out of Google Research. Now, for the first time, with Vertex AI, data science and ML engineering teams can:
Access the AI toolkit used internally to power Google, including computer vision, language, conversation and structured data capabilities, continuously enhanced by Google Research.
Deploy more useful AI applications, faster, with new MLOps features: Vertex Vizier, which increases the rate of experimentation; the fully managed Vertex Feature Store, which helps practitioners serve, share, and reuse ML features; and Vertex Experiments, which accelerates the deployment of models into production through faster model selection. For data that needs to stay on device or on-site, Vertex ML Edge Manager (currently in an experimental phase) is designed to deploy and monitor models at the edge with automated processes and flexible APIs.
Manage models with confidence by removing the complexity of self-service model maintenance and repeatability, using MLOps tools such as Vertex Model Monitoring, Vertex ML Metadata and Vertex Pipelines to streamline the end-to-end ML workflow.
“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” says Andrew Moore, Vice President and General Manager of Cloud AI and Industry Solutions at Google Cloud. “We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”
“Enterprise data science practitioners hoping to put AI to work across the enterprise aren’t looking to wrangle tooling. Rather, they want tooling that can tame the ML lifecycle. Unfortunately, that is no small order,” says Bradley Shimmin, chief analyst for AI Platforms, Analytics and Data Management at Omdia. “It takes a supportive infrastructure capable of unifying the user experience, plying AI itself as a supportive guide, and putting data at the very heart of the process, all while encouraging the flexible adoption of diverse technologies.”