Running a vector database in the cloud will be easier now that Pinecone is offering its vector database as a serverless offering on Google Cloud and Microsoft Azure, to go along with its existing serverless product on AWS. The company also announced new enterprise features, including bulk import from object storage, among others.
The advent of generative AI has supercharged the market for vector databases, which excel at storing, indexing, and searching vector embeddings. Vector databases were first used to bolster search with the power of nearest neighbor algorithms; today, companies are scrambling to deploy them as part of retrieval-augmented generation (RAG) setups that use pre-indexed vector embeddings to “ground” a large language model (LLM) with a customer’s own data.
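For readers unfamiliar with the pattern, the sketch below shows the basic RAG flow in Python: embed the question, retrieve the nearest pre-indexed chunks from a vector database, and hand the retrieved text to an LLM as grounding context. The names here (embed_fn, vector_index, llm) are illustrative placeholders, not Pinecone or OpenAI APIs.

```python
# Minimal RAG sketch; embed_fn, vector_index, and llm are placeholders, not real APIs.

def answer_with_rag(question: str, embed_fn, vector_index, llm, top_k: int = 5) -> str:
    # 1. Embed the user's question into the same vector space as the indexed documents.
    query_vector = embed_fn(question)

    # 2. Retrieve the nearest pre-indexed document chunks from the vector database.
    matches = vector_index.query(vector=query_vector, top_k=top_k)

    # 3. "Ground" the LLM by prepending the retrieved text to the prompt.
    context = "\n\n".join(m["text"] for m in matches)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)
```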
Pinecone’s native vector database is today seeing a surge of interest from companies building GenAI applications, such as chatbots, question-answering systems, and co-pilots. As one of the older vector databases on the market (established in 2019), Pinecone’s offering is well-regarded by analyst groups, and today’s announcements are likely to bolster that standing.
With serverless vector database offerings in all three major public clouds (it announced the general availability of its AWS serverless offering in May), Pinecone is positioned to capitalize on the current wave of investment in GenAI, much of which is occurring in the public cloud.
“Bringing Pinecone’s serverless vector database to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the platform on Google Cloud’s trusted, global infrastructure,” Dai Vu, the managing director of marketplace and ISV go-to-market programs at Google Cloud, said in a blog post today. “Pinecone customers can now easily build knowledgeable AI applications securely and at scale as they progress their digital transformation journeys.”
In addition to running its serverless offering on Microsoft Azure, Pinecone also integrates with the Azure OpenAI Service, which allows users to develop GenAI applications faster by accessing OpenAI’s models and Pinecone serverless within the same Azure environment, Pinecone says. The company also notes that it has recently rolled out the first version of its .NET SDK, giving Azure developers the ability to build using native Microsoft languages.
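To make the workflow concrete, here is a hedged sketch of creating a Pinecone serverless index on one of the newly supported clouds and querying it with an OpenAI embedding, using the Python clients (the .NET SDK mentioned above offers the equivalent for Microsoft languages; the plain OpenAI client stands in for Azure OpenAI Service here). The cloud/region strings and model name are assumptions to verify against current documentation.

```python
from openai import OpenAI
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
openai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

# Create a serverless index on Google Cloud (a cloud="azure" spec would target Azure);
# region names are assumptions -- check Pinecone's docs for supported values.
pc.create_index(
    name="genai-demo",
    dimension=1536,  # must match the embedding model's output size
    metric="cosine",
    spec=ServerlessSpec(cloud="gcp", region="us-central1"),
)
index = pc.Index("genai-demo")

# Embed a question and retrieve the closest pre-indexed records.
embedding = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input="How do I enable bulk import?",
).data[0].embedding

results = index.query(vector=embedding, top_k=3, include_metadata=True)
print(results)
```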
In a typical serverless application, the customer is freed from worrying about the underlying servers and their management. But Pinecone’s serverless implementation goes beyond the typical setup, according to a blog post written earlier this year by Pinecone VP of R&D Ram Sriharsha.
“Before Pinecone serverless, vector databases had to keep the entire index locally on the shards,” he writes in the January 16 blog post. “Such an architecture makes sense when you are running thousands of queries per second spread out across your entire corpus, but not for on-demand queries over large datasets where only a portion of your corpus is relevant for any query.”
“To drive order of magnitude cost savings to this workflow, we need to design vector databases that go beyond scatter-gather and likewise can effectively page portions of the index as needed from persistent, low-cost storage. That is, we need true decoupling of storage from compute for vector search.”
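The toy sketch below illustrates the idea Sriharsha describes, not Pinecone’s actual implementation: keep only lightweight partition metadata (here, centroids) in memory, and page in just the partitions relevant to a query from low-cost storage instead of scatter-gathering across every shard.

```python
# Toy illustration of decoupling storage from compute for vector search.
# Not Pinecone's architecture; the dict-based "storage" stands in for an object store.
import numpy as np

class PagedIndex:
    def __init__(self, centroids: np.ndarray, storage: dict):
        self.centroids = centroids   # one centroid per partition, kept in memory
        self.storage = storage       # partition id -> array of vectors, held in cheap storage

    def query(self, q: np.ndarray, top_k: int = 5, n_partitions: int = 2) -> np.ndarray:
        # Pick only the partitions whose centroids are closest to the query...
        nearest = np.argsort(self.centroids @ q)[-n_partitions:]
        # ...and page just those partitions in, rather than scanning the whole corpus.
        candidates = np.vstack([self.storage[int(p)] for p in nearest])
        scores = candidates @ q
        return candidates[np.argsort(scores)[-top_k:]]
```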
Pinecone also unveiled several new features in its serverless offerings today: role-based access controls (RBAC), enhancements to backups, bulk import from object storage, and a new SDK.
The bulk import feature will lower the cost of the initial data load into Pinecone by 6x, the company says. That should help Pinecone customers ramp up their proofs of concept (POCs) and production implementations.
“As an asynchronous, long-running operation, there’s no need for performance tuning or monitoring the status of your import operation,” Pinecone engineers Ben Esh and Gibbs Cullen write in the blog. “Just set it and forget it; Pinecone will handle the rest.”
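In practice, kicking off an import looks roughly like the sketch below. The start_import call, its arguments, and the bucket path are assumptions based on Pinecone’s description of the feature; consult the current SDK documentation for the exact method name and parameters.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("genai-demo")

# Point the import at a bucket of pre-embedded records and let the asynchronous,
# long-running operation run; no performance tuning or polling required.
# (start_import and the URI format are assumptions to check against the SDK docs.)
operation = index.start_import(uri="s3://example-bucket/embeddings/")
print(operation)
```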
Related Items:
Vectors: Coming to a Database Near You
Forrester Slices and Dices the Vector Database Market