You can now run OpenSearch version 2.13 in Amazon OpenSearch Service. With OpenSearch 2.13, we have made several improvements to search performance, resiliency, and OpenSearch Dashboards, and added new features to help you build AI-powered applications. We have introduced concurrent segment search, which allows you to query index segments in parallel at the shard level, improving latency for long-running requests that contain aggregations or large ranges. You can now index quantized vectors with FAISS-engine-based k-NN indexes, with the potential to reduce your memory footprint by as much as 50 percent with minimal impact on accuracy and latency. I/O-based admission control proactively monitors and prevents I/O usage breaches, further improving cluster resilience. Finally, this release introduces tracking of high-latency queries with the top N queries feature.

This launch also introduces several features that help you build and deploy AI-powered search applications. The new flow framework automates the configuration of the search and ingest pipeline resources required by advanced search features such as semantic, multimodal, and conversational search. It adds to existing capabilities for automating ml-commons resource setup, allowing you to package OpenSearch AI solutions into portable templates. Additionally, we have added predefined templates that automate setup for models integrated through our connectors to APIs such as OpenAI, Amazon Bedrock, and Cohere, enabling you to build solutions like semantic search.

For information on upgrading to OpenSearch 2.13, please see the documentation. OpenSearch 2.13 is now available in all AWS Regions where Amazon OpenSearch Service is available.
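As a rough illustration of the concurrent segment search feature, the Python sketch below toggles the open-source dynamic cluster setting search.concurrent_segment_search.enabled through the _cluster/settings API. The domain endpoint and credentials are placeholders, and whether this setting is exposed on a managed domain is an assumption.

    import requests

    # Hypothetical domain endpoint and basic-auth credentials (placeholders).
    ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"
    AUTH = ("master-user", "master-password")

    # Turn on concurrent segment search cluster-wide via the dynamic setting;
    # it can also be enabled per index with index.search.concurrent_segment_search.enabled.
    response = requests.put(
        f"{ENDPOINT}/_cluster/settings",
        json={"persistent": {"search.concurrent_segment_search.enabled": True}},
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())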
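For the quantized-vector support, here is a minimal sketch of a k-NN index mapping that pairs the Faiss engine's HNSW method with the fp16 scalar quantization (sq) encoder, following the open-source k-NN plugin conventions. The index name, vector dimension, and HNSW parameters are illustrative; the endpoint and credentials are again placeholders.

    import requests

    ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # placeholder
    AUTH = ("master-user", "master-password")                  # placeholder

    index_body = {
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "my_vector": {
                    "type": "knn_vector",
                    "dimension": 768,
                    "method": {
                        "name": "hnsw",
                        "engine": "faiss",
                        "space_type": "l2",
                        "parameters": {
                            # fp16 scalar quantization stores each dimension in
                            # 2 bytes instead of 4, roughly halving memory use.
                            "encoder": {"name": "sq", "parameters": {"type": "fp16"}},
                            "ef_construction": 128,
                            "m": 16,
                        },
                    },
                }
            }
        },
    }

    response = requests.put(
        f"{ENDPOINT}/my-quantized-index", json=index_body, auth=AUTH, timeout=30
    )
    response.raise_for_status()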
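For the top N queries feature, the sketch below enables latency-based query tracking and reads back the results. The setting names and the _insights/top_queries endpoint follow the open-source Query Insights plugin and should be treated as assumptions for a managed domain.

    import requests

    ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # placeholder
    AUTH = ("master-user", "master-password")                  # placeholder

    # Track the top 10 highest-latency queries over a rolling 60-minute window.
    settings = {
        "persistent": {
            "search.insights.top_queries.latency.enabled": True,
            "search.insights.top_queries.latency.window_size": "60m",
            "search.insights.top_queries.latency.top_n_size": 10,
        }
    }
    requests.put(
        f"{ENDPOINT}/_cluster/settings", json=settings, auth=AUTH, timeout=30
    ).raise_for_status()

    # Retrieve the current top N queries by latency.
    top_queries = requests.get(f"{ENDPOINT}/_insights/top_queries", auth=AUTH, timeout=30)
    print(top_queries.json())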
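Finally, for the flow framework and its predefined templates, a heavily hedged sketch of creating and provisioning a workflow from a built-in use case via the _plugins/_flow_framework/workflow API. The use_case value, the credential field in the body, and the availability of this API on a managed domain are all assumptions made for illustration.

    import requests

    ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # placeholder
    AUTH = ("master-user", "master-password")                  # placeholder

    # Create and provision a workflow from a predefined use case template.
    # The use_case name and the credential field below are illustrative.
    response = requests.post(
        f"{ENDPOINT}/_plugins/_flow_framework/workflow",
        params={"use_case": "semantic_search_with_cohere_embedding", "provision": "true"},
        json={"create_connector.credential.key": "<cohere-api-key>"},
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()
    workflow_id = response.json()["workflow_id"]
    print(f"Provisioning workflow {workflow_id}")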
Source:: Amazon AWS