
Discover social media insights in real time using Amazon Managed Service for Apache Flink and Amazon Bedrock


With over 550 million active users, X (formerly known as Twitter) has become a valuable tool for understanding public opinion, gauging sentiment, and spotting emerging trends. In an environment where over 500 million tweets are sent every day, it's crucial for brands to effectively analyze and interpret the data to maximize their return on investment (ROI), which is where real-time insights play a vital role.

Amazon Managed Service for Apache Flink lets you transform and analyze streaming data in real time with Apache Flink. Apache Flink supports stateful computation over large volumes of data in real time with exactly-once consistency guarantees. Moreover, Apache Flink's support for fine-grained control of time, with highly customizable window logic, allows you to implement the advanced business logic required to build a streaming data platform. Stream processing and generative artificial intelligence (AI) have emerged as powerful tools to harness the potential of real-time data. Amazon Bedrock, together with foundation models (FMs) such as Anthropic Claude on Amazon Bedrock, powers a new wave of AI adoption by enabling natural language conversational experiences.

In this post, we explore how to combine real-time analytics with the capabilities of generative AI and use state-of-the-art natural language processing (NLP) models to analyze tweets through queries related to your brand, product, or topic of choice. This goes beyond basic sentiment analysis and lets companies derive actionable insights they can use immediately to improve the customer experience. These include:

  • Identifying emerging trends and discussion topics related to your brand
  • Conducting granular sentiment analysis to truly understand customers' opinions
  • Detecting nuances such as emojis, acronyms, sarcasm, and irony
  • Recognizing and addressing concerns proactively before they spread
  • Guiding product development based on feature requests and feedback
  • Creating targeted customer segments for information campaigns

This post takes a step-by-step approach to show how you can use Retrieval Augmented Generation (RAG) to reference real-time tweets as context for large language models (LLMs). RAG is the process of optimizing the output of an LLM so it references an authoritative knowledge base outside of its training data sources before generating a response. LLMs are trained on vast volumes of data and use billions of parameters to generate original output for tasks such as answering questions, translating languages, and completing sentences. RAG extends the already powerful capabilities of LLMs to specific domains or an organization's internal knowledge base, all without the need to retrain the model. It's a cost-effective approach to improving LLM output so it remains relevant, accurate, and useful in various contexts.

Solution overview

In this section, we explain the flow and architecture of the application. We divide the application flow into two parts:

  • Data ingestion – Ingest data from streaming sources, convert it to vector embeddings, and then store them in a vector database
  • Insights retrieval – Invoke an LLM with the user queries to retrieve insights on tweets using the RAG approach

Data ingestion

The following diagram describes the data ingestion flow:

  1. Process feeds from streaming sources, such as social media feeds, Amazon Kinesis Data Streams, or Amazon Managed Streaming for Apache Kafka (Amazon MSK).
  2. Convert streaming data to vector embeddings in real time.
  3. Store them in a vector database.

Data is ingested from a streaming source (for example, X) and processed using an Apache Flink application. Apache Flink is an open source stream processing framework. It provides powerful streaming capabilities, enabling real-time processing, stateful computations, fault tolerance, high throughput, and low latency. Apache Flink is used to process the streaming data, perform deduplication, and invoke an embedding model to create vector embeddings.

Vector embeddings are numerical representations that capture the relationships and meaning of words, sentences, and other data types. These vector embeddings are used for semantic search or neural search to retrieve relevant information that then serves as context for the LLM when generating a response. After the text data is converted into vectors, the vectors are persisted in an Amazon OpenSearch Service domain, which is used as a vector database. Unlike traditional relational databases with rows and columns, data points in a vector database are represented by vectors with a fixed number of dimensions, which are clustered based on similarity.
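To make this concrete, the following minimal Python sketch (not part of the solution code) invokes the Amazon Titan Embeddings model through the Amazon Bedrock API and compares two short texts by cosine similarity; semantically similar texts produce vectors that lie close together:

import json
import math

import boto3

# Amazon Bedrock Runtime client (the Region is an example)
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text):
    # Titan Text Embeddings v1 takes {"inputText": ...} and returns a
    # 1,536-dimension vector under the "embedding" key
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(cosine_similarity(
    embed("The new phone battery lasts all day"),
    embed("Great battery life on this device"),
))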

OpenSearch Service offers scalable and efficient similarity search capabilities tailored for handling large volumes of dense vector data. OpenSearch Service seamlessly integrates with other AWS services, enabling you to build robust data pipelines within AWS. As a fully managed service, OpenSearch Service alleviates the operational overhead of managing the underlying infrastructure, while providing essential features like approximate k-Nearest Neighbor (k-NN) search algorithms, dense vector support, and robust monitoring and logging tools through Amazon CloudWatch. These capabilities make OpenSearch Service a suitable solution for applications that require fast and accurate similarity-based retrieval using vector embeddings.

This design enables real-time vector embedding, making it ideal for AI-driven applications.

Insights retrieval

The following diagram shows the flow from the user side, where the user submits a query through the frontend and gets a response from the LLM using the retrieved vector database documents as the context provided in the prompt.

As shown in the preceding figure, to retrieve insights from the LLM, you first receive a query from the user. The text query is then converted into vector embeddings using the same model that was used earlier for the tweets. It's important to make sure the same embedding model is used for both ingestion and search. The vector embeddings are then used to perform a semantic search in the vector database to obtain the related vectors and associated text. This serves as the context for the prompt. Next, the previous conversation history (if any) is added to the prompt. This serves as the conversation history for the model. Finally, the user's question is also included in the prompt and the LLM is invoked to get the response.

For the purposes of this post, we don't take the conversation history into account or store it for later use.

Solution architecture

Now that you understand the overall process flow, let's walk through the following architecture, built on AWS services, step by step.

The first part of the preceding figure shows the data ingestion process:

  1. A user authenticates with Amazon Cognito.
  2. The user connects to the Streamlit frontend and configures the following parameters: query terms, API bearer token, and the frequency for retrieving tweets.
  3. Managed Service for Apache Flink is used to consume and process the tweets in real time, and it stores in Apache Flink's state the parameters for making the API requests received from the frontend application.
  4. The streaming application uses Apache Flink's async I/O to invoke the Amazon Titan Embeddings model through the Amazon Bedrock API.
  5. Amazon Bedrock returns a vector embedding for each tweet.
  6. The Apache Flink application then writes the vector embedding, along with the original text of the tweet, into an OpenSearch Service k-NN index.

The remainder of the architecture diagram shows the insights retrieval process:

  1. A user sends a query through the Streamlit frontend application.
  2. An AWS Lambda function is invoked by Amazon API Gateway, passing the user query as input.
  3. The Lambda function uses LangChain to orchestrate the RAG process. As a first step, the function invokes the Amazon Titan Embeddings model on Amazon Bedrock to create a vector embedding for the question.
  4. Amazon Bedrock returns the vector embedding for the question.
  5. As a second step in the RAG orchestration process, the Lambda function performs a semantic search in OpenSearch Service and retrieves the relevant documents related to the question.
  6. OpenSearch Service returns the relevant documents containing the tweet text to the Lambda function.
  7. As a last step in the LangChain orchestration process, the Lambda function augments the prompt, adding the context and using few-shot prompting. The augmented prompt, together with instructions, examples, context, and query, is sent to the Anthropic Claude model through the Amazon Bedrock API.
  8. Amazon Bedrock returns the answer to the question in natural language to the Lambda function.
  9. The response is sent back to the user through API Gateway.
  10. API Gateway provides the response to the user's question in the Streamlit application.

The solution is available in the GitHub repo. Follow the README file to deploy the solution.

Now that you understand the overall flow and architecture, let's dive deeper into some of the key steps to understand how it works.

Amazon Bedrock chatbot UI

The Amazon Bedrock chatbot Streamlit application is designed to provide insights from tweets, whether they're real tweets ingested from the X API or simulated tweets or messages from the My Social Media application.

In the Streamlit application, we can provide the parameters that will be used to make the API requests to the X Developer API and pull the data from X. We developed an Apache Flink application that adjusts the API requests based on the provided parameters.

As parameters, you must provide the following:

  • Bearer token for API authorization – This is obtained from the X Developer platform when you sign up to use the APIs.
  • Query terms used to filter the tweets consumed – You can use the search operators available in the X documentation.
  • Frequency of the request – The X basic API only allows you to make a request every 15 seconds. If a lower interval is set, the application won't pull data.

The parameters are sent to Kinesis Data Streams through API Gateway and are consumed by the Apache Flink application.
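As an illustration, the frontend could post this configuration to the API Gateway endpoint as follows. This is a minimal sketch; the endpoint path and field names are assumptions rather than the sample application's exact contract:

import requests

# Hypothetical payload; check the frontend code in the GitHub repo for the exact field names
payload = {
    "bearer_token": "<X_API_BEARER_TOKEN>",
    "query": "(#AWS OR @awscloud) lang:en -is:retweet",
    "frequency_seconds": 30,  # must be at least 15 seconds on the X basic tier
}

response = requests.post(
    "https://<api-id>.execute-api.<region>.amazonaws.com/prod/parameters",  # hypothetical endpoint
    json=payload,
    timeout=10,
)
response.raise_for_status()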

My Social Media UI

The My Social Media application is a Streamlit application that serves as an additional UI. Through this application, users can compose and send messages, simulating the experience of posting on a social media site. These messages are then ingested into an AWS data pipeline consisting of API Gateway, Kinesis Data Streams, and an Apache Flink application. The Apache Flink application processes the incoming messages, invokes an Amazon Bedrock embedding model, and stores the data in an OpenSearch Service cluster.

To accommodate both real X data and simulated data from the My Social Media application, we've set up separate indexes within the OpenSearch Service cluster. This separation lets users choose which data source they want to analyze or query. The Streamlit application includes a sidebar option called Use X Index that acts as a toggle. When this option is enabled, the application queries and analyzes data from the index containing real tweets ingested from the X API. If the option is disabled, the application queries and displays data from the index containing messages sent through the My Social Media application.

Apache Flink is used because of its ability to scale with the increasing volume of tweets. The Apache Flink application is responsible for performing the data ingestion explained previously. Let's dive into the details of the flow.

Consume data from X

We use Apache Flink to process the API parameters sent from the Streamlit UI. We store the parameters in Apache Flink's state, which allows us to modify and update the parameters without having to restart the application. We use the ProcessFunction to take advantage of Apache Flink's internal timers and schedule the frequency of requests to fetch tweets. In this post, we use X's Recent search API, which gives us access to filtered public tweets posted over the last 7 days. The API response is paginated and returns a maximum of 100 tweets per request, in reverse chronological order. If there are more tweets to be consumed, the response of the previous request returns a token, which must be used in the next API call (a minimal sketch of this pagination loop follows the list below). After we receive the tweets from the API, we apply the following transformations:

  • Filter out the empty tweets (tweets without any text).
  • Partition the set of tweets by author ID. This helps distribute the processing to multiple subtasks in Apache Flink.
  • Apply deduplication logic so we only process tweets that haven't been processed yet. For this, we store the already processed tweet IDs in Apache Flink's state and use them to filter out tweets that have already been processed. We store the tweet IDs grouped by author ID, which could cause the state size of the application to grow. Because the API only provides tweets from the last 7 days when invoked, we've introduced a time-to-live (TTL) of 7 days so we don't grow the application's state indefinitely. You can adjust this based on your requirements.
  • Convert tweets into JSON objects for a later Amazon Bedrock API invocation.
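The following is a minimal Python sketch of the Recent search pagination loop referenced above. The solution implements the equivalent logic in Java inside the Flink ProcessFunction, so treat this as an illustration of the API interaction rather than the actual connector code:

import requests

RECENT_SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def fetch_recent_tweets(bearer_token, query, max_pages=5):
    headers = {"Authorization": f"Bearer {bearer_token}"}
    params = {
        "query": query,
        "max_results": 100,  # the API returns at most 100 tweets per request
        "tweet.fields": "author_id,created_at,public_metrics,lang",
    }
    for _ in range(max_pages):
        data = requests.get(RECENT_SEARCH_URL, headers=headers, params=params, timeout=10).json()
        for tweet in data.get("data", []):
            if tweet.get("text"):  # drop empty tweets
                yield tweet
        next_token = data.get("meta", {}).get("next_token")
        if not next_token:
            break  # no more pages for this query window
        params["next_token"] = next_token  # token for the next page of results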

Create vector embeddings

The vector embeddings are created by invoking the Amazon Titan Embeddings model through the Amazon Bedrock API. Asynchronous invocation of external APIs is an important performance consideration when building a stream processing architecture. Synchronous calls increase latency, reduce throughput, and can become a bottleneck for overall processing.

To invoke the Amazon Bedrock API, you use the Amazon Bedrock Runtime dependency in Java, which provides an asynchronous client that allows us to invoke Amazon Bedrock models asynchronously through the BedrockRuntimeAsyncClient. This client is invoked to create the embeddings. For this, we use Apache Flink's async I/O to make asynchronous requests to external APIs. Apache Flink's async I/O is a library within Apache Flink that lets you write asynchronous, non-blocking operators for stream processing applications, enabling better utilization of resources and higher throughput. We provide the asynchronous function to be called, the timeout duration that determines how long an asynchronous operation can take before it's considered failed, and the maximum number of requests that can be in progress at any point in time. Limiting the number of concurrent requests makes sure the operator won't accumulate an ever-growing backlog of pending requests. However, this can cause backpressure after the capacity is exhausted. Because we use the creation timestamp when we ingest into OpenSearch Service, ordering won't affect our results, so we can use Apache Flink's async I/O unordered function, which gives us better throughput and performance. See the following code:

DataStream<JSONObject> resultStream = AsyncDataStream
        .unorderedWait(inputJSON, new BedRockEmbeddingModelAsyncTweetFunction(), 15000, TimeUnit.MILLISECONDS, 1000)
        .uid("tweet-async-function");

Let's take a closer look at the Apache Flink async I/O function. The following steps take place within the CompletableFuture Java class:

  1. First, we create the Amazon Bedrock Runtime async client:
BedrockRuntimeAsyncClient runtime = BedrockRuntimeAsyncClient.builder()
        .region(Region.of(region))  // Use the specified AWS Region
        .build();
  2. We then extract the tweet from the event and build the payload that we'll send to Amazon Bedrock:
String stringBody = jsonObject.getString("tweet");

ArrayList<String> stringList = new ArrayList<>();
stringList.add(stringBody);

JSONObject jsonBody = new JSONObject()
        .put("inputText", stringBody);

SdkBytes body = SdkBytes.fromUtf8String(jsonBody.toString());
  3. After we have the payload, we can call the InvokeModel API and invoke Amazon Titan to create the vector embeddings for the tweets:
InvokeModelRequest request = InvokeModelRequest.builder()
        .modelId("amazon.titan-embed-text-v1")
        .contentType("application/json")
        .accept("*/*")
        .body(body)
        .build();

CompletableFuture<InvokeModelResponse> futureResponse = runtime.invokeModel(request);
  4. After receiving the vector, we append the following fields to the output JSONObject:
    1. Cleaned tweet
    2. Tweet creation timestamp
    3. Number of likes of the tweet
    4. Number of retweets
    5. Number of views of the tweet (impressions)
    6. Tweet ID
// Extract and process the response when it's available
JSONObject response = new JSONObject(
        futureResponse.join().body().asString(StandardCharsets.UTF_8)
);

// Add additional fields related to tweet data to the response
response.put("tweet", jsonObject.get("tweet"));
response.put("@timestamp", jsonObject.get("created_at"));
response.put("likes", jsonObject.get("likes"));
response.put("retweet_count", jsonObject.get("retweet_count"));
response.put("impression_count", jsonObject.get("impression_count"));
response.put("_id", jsonObject.get("_id"));

return response;

This returns the embeddings, the original text, the additional fields, and the number of tokens used for the embedding. In our connector, we only consume messages in English, and we ignore messages that are retweets of other tweets.

The same processing steps are replicated for messages coming from the My Social Media application (manually ingested).

Store vector embeddings in OpenSearch Service

We use OpenSearch Service as a vector database for semantic search. Before we can write the data into OpenSearch Service, we need to create an index that supports semantic search. We're using the k-NN plugin. The vector database index mapping should have the following properties for storing vectors for similarity search:

"embeddings": {
        "sort": "knn_vector",
        "dimension": 1536,
        "technique": {
          "title": "hnsw",
          "space_type": "l2",
          "engine": "nmslib",
          "parameters": {
            "ef_construction": 128,
            "m": 24
          }
        }
      }

The key parameters are as follows:

  • type – This specifies that the field will hold vector data for a k-NN similarity search. The value should be knn_vector.
  • dimension – The number of dimensions for each vector. This must match the model's dimension. In this case, we use 1,536 dimensions, the same as the Amazon Titan Text Embeddings v1 model.
  • method – Defines the algorithm and parameters for indexing and searching the vectors:
    • name – The identifier for the nearest neighbor method. We use Hierarchical Navigable Small World (HNSW), a hierarchical proximity graph approach, to run an approximate k-NN (A-NN) search, because standard k-NN isn't a scalable approach.
    • space_type – The vector space used to calculate the distance between vectors. It supports several space types. The default value is l2.
    • engine – The approximate k-NN library to use for indexing and search. The available libraries are faiss, nmslib, and Lucene.
    • ef_construction – The size of the dynamic list used during k-NN graph creation. Higher values result in a more accurate graph but slower indexing speed.
    • m – The number of bidirectional links that the plugin creates for each new element. Increasing or decreasing this value can have a significant impact on memory consumption. Keep this value between 2 and 100.

Standard k-NN search methods compute similarity using a brute-force approach that measures the nearest distance between a query and a number of points, which produces exact results. This works well for many applications. However, in the case of extremely large datasets with high dimensionality, this creates a scaling problem that reduces the efficiency of the search. The approximate k-NN search methods used by OpenSearch Service rely on approximate nearest neighbor (ANN) algorithms from the nmslib, faiss, and Lucene libraries to power k-NN search. These search methods employ ANN to improve search latency for large datasets. Of the three search methods the k-NN plugin provides, this method offers the best search scalability for large datasets. This is the preferred approach when a dataset reaches hundreds of thousands of vectors. For more information about the different methods and their trade-offs, refer to Comprehensive Guide To Approximate Nearest Neighbors Algorithms.
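For illustration, the following Python sketch issues a direct approximate k-NN query against the embeddings field using the opensearch-py client. The index name and credentials are placeholders, and in this solution the equivalent search is performed later by the LangChain retriever:

from opensearchpy import OpenSearch, RequestsHttpConnection

client = OpenSearch(
    hosts=[{"host": "<domain-endpoint>", "port": 443}],
    http_auth=("<user>", "<password>"),
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

question_vector = [0.0] * 1536  # replace with the question's embedding from the same Titan model

results = client.search(
    index="tweets-index",  # placeholder index name
    body={
        "size": 10,
        "query": {"knn": {"embeddings": {"vector": question_vector, "k": 10}}},
        "_source": ["tweet", "@timestamp", "likes", "retweet_count", "impression_count"],
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["tweet"])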

To use the k-NN plugin's approximate search functionality, we must first create a k-NN index with index.knn set to true:

    "settings" : {
      "index" : {
        "knn": true,
        "number_of_shards" : "5",
        "number_of_replicas" : "1"
      }
    }

After we have our indexes created, we can sink the data from our Apache Flink application into OpenSearch Service.
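To illustrate the shape of the documents the pipeline writes, here is a minimal opensearch-py sketch. The actual solution writes these documents from the Java Flink application through its OpenSearch sink, so the index name and field values below are examples only:

from opensearchpy import OpenSearch, RequestsHttpConnection

# Same placeholder connection settings as in the previous sketch
client = OpenSearch(hosts=[{"host": "<domain-endpoint>", "port": 443}],
                    http_auth=("<user>", "<password>"), use_ssl=True,
                    verify_certs=True, connection_class=RequestsHttpConnection)

document = {
    "embeddings": [0.0] * 1536,  # Titan vector embedding for the tweet text
    "tweet": "Loving the new feature release!",
    "@timestamp": "2024-05-01T12:34:56.000Z",
    "likes": 42,
    "retweet_count": 7,
    "impression_count": 1800,
}

# Using the tweet ID as the document ID means re-processing the same tweet overwrites it
client.index(index="tweets-index", id="1786012345678901234", body=document)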

RetrievalQA using Lambda and LangChain

For this part, we take an input question from the user and invoke a Lambda function. The Lambda function retrieves relevant tweets from OpenSearch Service as context and generates an answer using the LangChain RAG chain RetrievalQA. LangChain is a framework for developing applications powered by language models.

First, some setup. We instantiate the bedrock-runtime client that allows the Lambda function to invoke the models:

bedrock_runtime = boto3.client("bedrock-runtime", "us-east-1")

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1", client=bedrock_runtime)

The BedrockEmbeddings class uses the Amazon Bedrock API to generate embeddings for the user's input question. It strips new line characters from the text. Note that we need to pass as arguments the instantiated bedrock_runtime client and the model ID for the Amazon Titan Text Embeddings v1 model.

Next, we instantiate the client for the OpenSearchVectorSearch LangChain class, which allows the Lambda function to connect to the OpenSearch Service domain and perform the semantic search against the previously indexed X embeddings. For the embedding function, we pass the embeddings model that we defined previously. This is used during the LangChain orchestration process:

os_client = OpenSearchVectorSearch(
        index_name=aos_index,
        embedding_function=embeddings,
        http_auth=(os.environ['aosUser'], os.environ['aosPassword']),
        opensearch_url=os.environ['aosDomain'],
        timeout=300,
        use_ssl=True,
        verify_certs=True,
        connection_class=RequestsHttpConnection,
        )

We need to define the LLM from Amazon Bedrock to use for text generation. The temperature is set to 0 to reduce hallucinations:

model_kwargs={"temperature": 0, "max_tokens": 4096}

llm = BedrockChat(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    client=bedrock_runtime,
    model_kwargs=model_kwargs
)

Next, in our Lambda function, we create the prompt that instructs the model on the specific task of analyzing hundreds of tweets in the context. To normalize the output, we use a prompt engineering technique called few-shot prompting. Few-shot prompting allows language models to learn and generate responses based on a small number of examples or demonstrations provided in the prompt itself. In this approach, instead of training the model on a large dataset, we provide a few examples of the desired task or output within the prompt. These examples serve as a guide or conditioning for the model, enabling it to understand the context and the desired format or pattern of the response. When presented with a new input after the examples, the model can then generate an appropriate response by following the patterns and context established by the few-shot demonstrations in the prompt.

As part of the prompt, we then provide examples of questions and answers, so the chatbot can follow the same pattern when used (see the Lambda function to view the complete prompt):

template = """As a useful agent that's an professional analysing tweets, please reply the query utilizing solely the offered tweets from the context in <context></context> tags. When you do not see priceless data on the tweets offered within the context in <context></context> tags, say you do not have sufficient tweets associated to the query. Cite the related context you used to construct your reply. Print in a bullet level record the highest most influential tweets from the context on the finish of the response.
    
    Discover beneath some examples:
    <example1>
    query: 
    What are the principle challenges or considerations talked about in tweets about utilizing Bedrock as a generative AI service on AWS, and the way can they be addressed?
    
    reply:
    Primarily based on the tweets offered within the context, the principle challenges or considerations talked about about utilizing Bedrock as a generative AI service on AWS are:

1.	...
2.	...
3.	...
4.	...
...
    
    To handle these considerations:

1.	...
2.	...
3.	...
4.	...
...

    High tweets from context:

    [1] ...
    [2] ...
    [3] ...
    [4] ...

    </example1>
    
    <example2>
    ...
    </example2>
    
    Human: 
    
    query: {query}
    
    <context>
    {context}
    </context>
    
    Assistant:"""

    immediate = PromptTemplate(input_variables=["context","question"], template=template)

We then create the RetrievalQA LangChain chain using the prompt template, Anthropic Claude on Amazon Bedrock, and the OpenSearch Service retriever configured previously. The RetrievalQA LangChain chain orchestrates the following RAG steps:

  • Invoke the text embedding model to create a vector for the user's question
  • Perform a semantic search on OpenSearch Service using the vector to retrieve the tweets relevant to the user's question (k=200)
  • Invoke the LLM using the augmented prompt containing the prompt template, context (stuffed retrieved tweets), and question

chain = RetrievalQA.from_chain_type(
    llm=llm,
    verbose=True,
    chain_type="stuff",
    retriever=os_client.as_retriever(
        search_type="similarity",
        search_kwargs={
            "k": 200,
            "space_type": "l2",
            "vector_field": "embeddings",
            "text_field": text_field
        }
    ),
    chain_type_kwargs={"prompt": prompt}
)

Finally, we run the chain:

response = chain.invoke({"query": message})

The response from the LLM is sent back to the user application, as shown in the following screenshot.
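Putting the retrieval pieces together, the Lambda handler can be sketched as follows. The request parsing and response fields are assumptions about the sample application's API contract, and chain refers to the RetrievalQA chain built earlier:

import json

def lambda_handler(event, context):
    # The "question" field name in the request body is an assumption
    message = json.loads(event["body"])["question"]

    # chain is the RetrievalQA chain created above
    result = chain.invoke({"query": message})

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": result["result"]}),
    }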

Considerations

You can extend the solution presented in this post. When you do, consider the following suggestions:

  • Configure index retention and rollover in OpenSearch Service to manage the index lifecycle and data retention effectively
  • Incorporate chat history into the chatbot to provide richer context and improve the relevance of LLM responses
  • Add filters and hybrid search, with the ability to adjust the weight given to keyword and semantic search, to enhance RAG retrieval
  • Adjust the TTL for Apache Flink's state to match your requirements (the solution in this post uses 7 days)
  • Enable logging in API Gateway and in the Streamlit application

Summary

This post demonstrated how to combine real-time analytics with generative AI capabilities to analyze tweets related to a brand, product, or topic of interest. It uses Amazon Managed Service for Apache Flink to process tweets from the X API, create vector embeddings using the Amazon Titan Embeddings model on Amazon Bedrock, and store the embeddings in an OpenSearch Service index configured for vector similarity search, with all of these steps happening in real time.

The post also explained how users can enter queries through a Streamlit frontend application, which invokes a Lambda function. This Lambda function retrieves relevant tweets from OpenSearch Service by performing a semantic search on the stored embeddings using the LangChain RetrievalQA chain. It then generates insightful answers using the Anthropic Claude LLM on Amazon Bedrock.

The solution enables identifying trends, conducting sentiment analysis, detecting nuances, addressing concerns, guiding product development, and creating targeted customer segments based on real-time X data.

To get started with generative AI, visit Generative AI on AWS for information about industry use cases and tools to build and scale generative AI applications, as well as the post Exploring real-time streaming for generative AI Applications for other use cases for streaming with generative AI.


About the Authors

Francisco Morillo is a Streaming Solutions Architect at AWS, specializing in real-time analytics architectures. With over 5 years in the streaming data space, Francisco has worked as a data analyst for startups and as a big data engineer for consultancies, building streaming data pipelines. He has deep expertise in Amazon Managed Streaming for Apache Kafka (Amazon MSK) and Amazon Managed Service for Apache Flink. Francisco collaborates closely with AWS customers to build scalable streaming data solutions and advanced streaming data lakes, ensuring seamless data processing and real-time insights.

Sergio Garcés Vitale is a Senior Solutions Architect at AWS, passionate about generative AI. With over 10 years of experience in the telecommunications industry, where he helped build data and observability platforms, Sergio now focuses on guiding Retail and CPG customers in their cloud adoption, as well as customers across all industries and sizes in implementing artificial intelligence use cases.

Subham Rakshit is a Senior Streaming Solutions Architect for Analytics at AWS based in the UK. He works with customers to design and build streaming architectures so they can get value from analyzing their streaming data. His two little daughters keep him occupied most of the time outside work, and he loves solving jigsaw puzzles with them. Connect with him on LinkedIn.


