Migrate from Apache Solr to OpenSearch


OpenSearch is an open source, distributed search engine suitable for a wide range of use cases such as ecommerce search, enterprise search (content management search, document search, knowledge management search, and so on), site search, application search, and semantic search. It's also an analytics suite that you can use to perform interactive log analytics, real-time application monitoring, security analytics, and more. Like Apache Solr, OpenSearch provides search across document sets. OpenSearch also includes capabilities to ingest and analyze data. Amazon OpenSearch Service is a fully managed service that you can use to deploy, scale, and monitor OpenSearch in the AWS Cloud.

Many organizations are migrating their Apache Solr based search solutions to OpenSearch. The main driving factors include lower total cost of ownership, scalability, stability, improved ingestion connectors (such as Data Prepper, Fluent Bit, and OpenSearch Ingestion), elimination of external cluster managers like ZooKeeper, enhanced reporting, and rich visualizations with OpenSearch Dashboards.

We recommend approaching a Solr to OpenSearch migration with a full refactor of your search solution to optimize it for OpenSearch. While both Solr and OpenSearch use Apache Lucene for core indexing and query processing, the systems exhibit different characteristics. By planning and running a proof of concept, you can ensure the best outcomes from OpenSearch. This blog post dives into the strategic considerations and steps involved in migrating from Solr to OpenSearch.

Key differences

Solr and OpenSearch Service share fundamental capabilities delivered through Apache Lucene. However, there are some key differences in terminology and functionality between the two:

  • Collection and index: In OpenSearch, a collection is called an index.
  • Shard and replica: Both Solr and OpenSearch use the terms shard and replica.
  • API-driven interactions: All interactions in OpenSearch are API-driven, eliminating the need for manual file changes or ZooKeeper configurations. When creating an OpenSearch index, you define the mapping (equivalent to the schema) and the settings (equivalent to solrconfig.xml) as part of the index creation API call.

Having set the stage with the basics, let's dive into the four key components and how each of them can be migrated from Solr to OpenSearch.

Collection to index

A collection in Solr is called an index in OpenSearch. Like a Solr collection, an index in OpenSearch also has shards and replicas.

Although the shard and replica concept is similar in both search engines, you can use this migration as a window to adopt a better sharding strategy. Size your OpenSearch shards, replicas, and index by following shard strategy best practices.

As part of the migration, rethink your data model. In analyzing your data model, you may find efficiencies that dramatically improve your search latency and throughput. Poor data modeling doesn't only result in search performance problems but extends to other areas. For example, you might find it challenging to construct an effective query to implement a particular feature. In such cases, the solution often involves modifying the data model.

Differences: Solr allows primary shard and replica shard collocation on the same node. OpenSearch doesn't place a primary and its replica on the same node. OpenSearch Service zone awareness can automatically distribute shards across different Availability Zones (data centers) to further improve resiliency.

The OpenSearch and Solr notions of replica are different. In OpenSearch, you define a primary shard count using number_of_shards, which determines the partitioning of your data. You then set a replica count using number_of_replicas. Each replica is a copy of all the primary shards. So, if you set number_of_shards to 5 and number_of_replicas to 1, you'll have 10 shards (5 primary shards and 5 replica shards). Setting replicationFactor=1 in Solr yields one copy of the data (the primary).

For example, the following creates a collection called test with one shard and no replicas:

http://localhost:8983/solr/admin/collections?
  action=CREATE
  &maxShardsPerNode=2
  &name=test
  &numShards=1
  &replicationFactor=1
  &wt=json

In OpenSearch, the following creates an index called test with five shards and one replica:

PUT test
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}

Schema to mapping

In Solr, schema.xml or managed-schema holds all the field definitions, dynamic fields, and copy fields, along with field types (text analyzers, tokenizers, or filters). You use the Schema API to manage the schema, or you can run in schema-less mode.

OpenSearch has dynamic mapping, which behaves like Solr in schema-less mode. It's not necessary to create an index beforehand to ingest data. By indexing data with a new index name, you create the index with OpenSearch Service default settings (for example: "number_of_shards": 5, "number_of_replicas": 1) and a mapping based on the data that's indexed (dynamic mapping).

We strongly recommend you opt for a pre-defined strict mapping. OpenSearch sets the schema based on the first value it sees in a field. If a stray numeric value is the first value for what is otherwise a string field, OpenSearch will incorrectly map the field as numeric (integer, for example). Subsequent indexing requests with string values for that field will then fail with a mapping exception. You know your data and your field types; you'll benefit from setting the mapping directly.
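As a sketch of what a strict mapping looks like (the index and field names here are illustrative), "dynamic": "strict" rejects documents containing any field not declared in the mapping:

```
PUT strict_example
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "name": { "type": "keyword" },
      "age": { "type": "integer" }
    }
  }
}
```

With this setting, indexing a document with an undeclared field fails with a strict_dynamic_mapping_exception instead of silently widening the mapping.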

Tip: Consider performing a sample indexing run to generate the initial mapping, and then refine and tidy up the mapping to accurately define the actual index. This approach helps you avoid manually constructing the mapping from scratch.
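For example (the index name and document are illustrative), you can index a sample document and then fetch the mapping that OpenSearch inferred:

```
POST sample_products/_doc
{
  "name": "shirt",
  "age": 29,
  "last_modified": "2023-10-01T00:00:00Z"
}

GET sample_products/_mapping
```

The returned mapping is only a starting point; dynamic mapping maps strings to text with a keyword subfield by default, so you would typically tighten the types (for example, keyword for identifiers) before creating the real index.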

For observability workloads, you should consider using Simple Schema for Observability. Simple Schema for Observability (also known as ss4o) is a standard for conforming to a common and unified observability schema. With the schema in place, observability tools can ingest, automatically extract, and aggregate data and create custom dashboards, making it easier to understand the system at a higher level.

Many of the field types (data types), tokenizers, and filters are the same in both Solr and OpenSearch. After all, both use Lucene's Java search library at their core.

Let's look at an example:

<!-- Solr schema.xml snippets -->
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<field name="name" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="address" type="text_general" indexed="true" stored="true"/>
<field name="user_token" type="string" indexed="false" stored="true"/>
<field name="age" type="pint" indexed="true" stored="true"/>
<field name="last_modified" type="pdate" indexed="true" stored="true"/>
<field name="city" type="text_general" indexed="true" stored="true"/>

<uniqueKey>id</uniqueKey>

<copyField source="name" dest="text"/>
<copyField source="address" dest="text"/>

<fieldType name="string" class="solr.StrField" sortMissingLast="true" />
<fieldType name="pint" class="solr.IntPointField" docValues="true"/>
<fieldType name="pdate" class="solr.DatePointField" docValues="true"/>

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="false" />
    <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
<analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="false" />
    <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>

The equivalent index creation in OpenSearch looks like this:

PUT index_from_solr
{
  "settings": {
    "analysis": {
      "analyzer": {
        "text_general": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "keyword",
        "copy_to": "text"
      },
      "address": {
        "type": "text",
        "analyzer": "text_general"
      },
      "user_token": {
        "type": "keyword",
        "index": false
      },
      "age": {
        "type": "integer"
      },
      "last_modified": {
        "type": "date"
      },
      "city": {
        "type": "text",
        "analyzer": "text_general"
      },
      "text": {
        "type": "text",
        "analyzer": "text_general"
      }
    }
  }
}

Notable things in OpenSearch compared to Solr:

  1. _id is always the uniqueKey and can't be defined explicitly, because it's always present.
  2. Explicitly enabling multiValued isn't necessary, because any OpenSearch field can contain zero or more values.
  3. The mapping and the analyzers are defined during index creation. New fields can be added and certain mapping parameters can be updated later. However, deleting a field isn't possible. The Reindex API can overcome this problem: you can use it to index data from one index into another.
  4. By default, analyzers apply at both index and query time. For some less-common scenarios, you can change the query analyzer at search time (in the query itself), which overrides the analyzer defined in the index mapping and settings.
  5. Index templates are also a great way to initialize new indexes with predefined mappings and settings. For example, if you continuously index log data (or any time-series data), you can define an index template so that all the indices have the same number of shards and replicas. Templates can also be used for dynamic mapping control and component templates.
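A minimal sketch of the Reindex API mentioned in item 3 (the source and destination index names are placeholders); the destination index should first be created with the corrected mapping:

```
POST _reindex
{
  "source": { "index": "old_index" },
  "dest": { "index": "new_index" }
}
```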

Look for opportunities to optimize the search solution. For instance, if analysis reveals that the city field is solely used for filtering rather than searching, consider changing its field type to keyword instead of text to eliminate unnecessary text processing. Another optimization might involve disabling doc_values for the user_token field if it's only meant for display purposes. (doc_values are disabled by default for the text datatype.)
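Applied to the earlier example, those two optimizations might look like the following sketch (assuming city is only filtered on and user_token is display-only):

```
PUT index_from_solr_optimized
{
  "mappings": {
    "properties": {
      "city": { "type": "keyword" },
      "user_token": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      }
    }
  }
}
```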

SolrConfig to settings

In Solr, solrconfig.xml carries the collection configuration: everything from index location and formatting, caching, codec factory, circuit breakers, commits, and tlogs, all the way up to slow query configuration, request handlers, and the update processing chain.

Let's look at an example:

<codecFactory class="solr.SchemaCodecFactory">
<str name="compressionMode">BEST_COMPRESSION</str>
</codecFactory>

<autoCommit>
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

<slowQueryThresholdMillis>1000</slowQueryThresholdMillis>

<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

<requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="wt">json</str>
    <str name="indent">true</str>
    <str name="df">text</str>
    </lst>
</requestHandler>

<searchComponent name="spellcheck" class="solr.SpellCheckComponent"/>
<searchComponent name="suggest" class="solr.SuggestComponent"/>
<searchComponent name="elevator" class="solr.QueryElevationComponent"/>
<searchComponent class="solr.HighlightComponent" name="highlight"/>

<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy"/>
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter"/>

<updateRequestProcessorChain name="script"/>

Notable things in OpenSearch compared to Solr:

  1. Both OpenSearch and Solr default to the BEST_SPEED codec (the LZ4 compression algorithm). Both offer BEST_COMPRESSION as an alternative. OpenSearch additionally offers zstd and zstd_no_dict. Benchmarks of the different compression codecs are also available.
  2. For near real-time search, refresh_interval needs to be set. The default is 1 second, which is good enough for most use cases. We recommend increasing refresh_interval to 30 or 60 seconds to improve indexing speed and throughput, especially for batch indexing.
  3. The maximum boolean clause count is a static setting, set at the node level using the indices.query.bool.max_clause_count setting.
  4. You don't need an explicit requestHandler. All searches use the _search or _msearch endpoint. If you're used to using a requestHandler with default values, you can use search templates instead.
  5. If you're used to using the /sql requestHandler, OpenSearch also lets you use SQL syntax for querying and has a Piped Processing Language (PPL).
  6. Spellcheck (also known as did-you-mean), query elevation (called pinned_query in OpenSearch), and highlighting are all supported at query time. You don't have to explicitly define search components.
  7. Most API responses are limited to JSON format, with the CAT APIs as the only exception. In cases where Velocity or XSLT is used in Solr, it must be managed at the application layer. The CAT APIs respond in JSON, YAML, or CBOR formats.
  8. For the updateRequestProcessorChain, OpenSearch provides the ingest pipeline, allowing the enrichment or transformation of data before indexing. Multiple processor stages can be chained to form a pipeline for data transformation. Processors include GrokProcessor, CSVParser, JSONProcessor, KeyValue, Rename, Split, HTMLStrip, Drop, ScriptProcessor, and more. However, it's strongly recommended to do the data transformation outside OpenSearch. The best place to do that is OpenSearch Ingestion, which provides a proper framework and various out-of-the-box filters for data transformation. OpenSearch Ingestion is built on Data Prepper, a server-side data collector capable of filtering, enriching, transforming, normalizing, and aggregating data for downstream analytics and visualization.
  9. OpenSearch also introduced search pipelines, similar to ingest pipelines but tailored for search-time operations. Search pipelines make it easier for you to process search queries and search results within OpenSearch. Currently available search processors include filter query, neural query enricher, normalization, rename field, script, and personalize search ranking, with more to come.
  10. refresh_interval, like other index-level settings, can be set when you create the index or updated later through the index settings API.
  11. Slow logs can be configured through the same settings API, with much more precision than in Solr: there are separate thresholds for the query and fetch phases.
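For example, refresh_interval and slow log thresholds can be applied to the test index created earlier through the settings API (the threshold values here are illustrative):

```
PUT test/_settings
{
  "index": {
    "refresh_interval": "30s",
    "search.slowlog.threshold.query.warn": "2s",
    "search.slowlog.threshold.query.info": "1s",
    "search.slowlog.threshold.fetch.warn": "1s",
    "search.slowlog.threshold.fetch.info": "500ms"
  }
}
```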

Before migrating each configuration setting, assess whether the setting should be adjusted at all, based on your experience with the current search system and on best practices. For instance, in the preceding example, the slow query threshold of 1 second might produce excessive logging, so it could be revisited. In the same example, maxBooleanClauses might be another value to examine and reduce.

Differences: Some settings are made at the cluster level or node level and not at the index level, including settings such as the maximum boolean clause count, circuit breaker settings, and cache settings.
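On a self-managed cluster, for instance, the maximum boolean clause count is a static node setting configured in opensearch.yml rather than through an index API (on Amazon OpenSearch Service, node-level files aren't directly editable, so such settings are managed by the service):

```
# opensearch.yml — static node-level setting; requires a node restart
indices.query.bool.max_clause_count: 2048
```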

Rewriting queries

Rewriting queries deserves its own blog post; however, we want to at least showcase the autocomplete feature available in OpenSearch Dashboards, which helps ease query writing.

Similar to the Solr Admin UI, OpenSearch also features a UI called OpenSearch Dashboards. You can use OpenSearch Dashboards to manage and scale your OpenSearch clusters. Additionally, it provides capabilities for visualizing your OpenSearch data, exploring data, monitoring observability, running queries, and so on. The equivalent of the query tab in the Solr UI is Dev Tools in OpenSearch Dashboards. Dev Tools is a development environment that lets you set up your OpenSearch Dashboards environment, run queries, explore data, and debug problems.

Now, let's construct a query to accomplish the following:

  1. Search for shirt OR shoe in an index.
  2. Create a facet query to find the number of unique customers. Facet queries are called aggregation queries (or aggs queries) in OpenSearch.

The Solr query would look like this:

http://localhost:8983/solr/solr_sample_data_ecommerce/select?q=shirt OR shoe
  &facet=true
  &facet.field=customer_id
  &facet.limit=-1
  &facet.mincount=1
  &json.facet={
   unique_customer_count:"unique(customer_id)"
  }

The same search can be rewritten in OpenSearch query DSL as a match query combined with a cardinality aggregation.
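A sketch of that rewrite (the text and customer_id field names are assumed to match the Solr df and facet fields; the match query defaults to OR between terms, and the cardinality aggregation approximates the unique count):

```
GET solr_sample_data_ecommerce/_search
{
  "query": {
    "match": {
      "text": "shirt shoe"
    }
  },
  "aggs": {
    "unique_customer_count": {
      "cardinality": { "field": "customer_id" }
    }
  }
}
```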

Conclusion

OpenSearch covers a wide variety of use cases, including enterprise search, site search, application search, ecommerce search, semantic search, observability (log analytics, security analytics (SIEM), anomaly detection, trace analytics), and analytics. Migration from Solr to OpenSearch is becoming a common pattern. This blog post is designed to be a starting point for teams seeking guidance on such migrations.

You can try out OpenSearch with the OpenSearch Playground. You can get started with Amazon OpenSearch Service, a managed implementation of OpenSearch in the AWS Cloud.


About the Authors

Aswath Srinivasan is a Senior Search Engine Architect at Amazon Web Services, currently based in Munich, Germany. With over 17 years of experience in various search technologies, Aswath currently focuses on OpenSearch. He's a search and open-source enthusiast and helps customers and the search community with their search problems.

Jon Handler is a Senior Principal Solutions Architect at Amazon Web Services based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon's career as a software developer included four years of coding a large-scale, ecommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a PhD in Computer Science and Artificial Intelligence from Northwestern University.
