
Virtual talk

Accelerate ML Deployment with Amazon SageMaker + Tecton

Isaac Cameron
Solutions Architect
Tecton

Arnab Sinha
Solutions Architect
AWS

It takes a ton of engineering work to build and manage the high-performance infrastructure needed for demanding use cases like fraud detection. ML teams usually end up in a maze of brittle architecture components, with months-long deployment cycles to add even one feature.

Join AWS Solutions Architect Arnab Sinha and Tecton Solutions Architect Isaac Cameron for a technical discussion of how ML engineers can dramatically shorten the development and deployment timeline using Amazon SageMaker AI and Tecton. This live talk will walk through how you can:

  • Simplify your ML architecture instead of manually stitching processes together
  • Define features in your Jupyter notebook hosted on SageMaker AI and deploy them to the Tecton feature platform in minutes, not months (see the sketch after this list)
  • Automatically orchestrate feature pipelines on Tecton-managed EMR clusters
  • Execute training jobs in SageMaker AI using up-to-date features
  • Scale model inference to 100K+ requests per second at <5ms latency using SageMaker AI and Tecton’s feature platform
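
To give a flavor of the feature-definition step, here is a rough sketch of what a Tecton batch feature view can look like when written in Python from a notebook. Decorator and parameter names follow Tecton's public SDK documentation but vary across SDK versions, and the `transactions` data source and `user` entity are assumed to be defined elsewhere in the feature repository; the demo in the talk may look different.

    # Illustrative sketch only: decorator and parameter names follow Tecton's
    # public SDK docs, but exact signatures vary across SDK versions.
    from datetime import datetime, timedelta
    from tecton import batch_feature_view, Aggregation

    # `transactions` (a batch data source) and `user` (an entity) are assumed
    # to be defined elsewhere in the feature repository.
    @batch_feature_view(
        sources=[transactions],
        entities=[user],
        mode="spark_sql",
        online=True,    # materialize to the online store for low-latency serving
        offline=True,   # materialize to the offline store for training data
        feature_start_time=datetime(2024, 1, 1),
        aggregation_interval=timedelta(days=1),
        aggregations=[
            Aggregation(column="amount", function="sum", time_window=timedelta(days=7)),
            Aggregation(column="amount", function="count", time_window=timedelta(days=30)),
        ],
    )
    def user_transaction_aggregates(transactions):
        return f"""
            SELECT user_id, amount, timestamp
            FROM {transactions}
        """

In Tecton's workflow, applying the feature repository (for example with the `tecton apply` CLI command) is what kicks off the managed materialization pipelines referenced above.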

This event will also include a demo showing how to build and deploy a fraud detection system using SageMaker AI and Tecton. You’ll see how you can define a feature in Python and have it running in production minutes later.
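
As a rough sketch of the serving path behind such a demo, the snippet below retrieves precomputed features from Tecton's online store and passes them to a deployed SageMaker endpoint for scoring. The workspace, feature service, and endpoint names are placeholder assumptions, and the Tecton SDK call names may differ by version; only the SageMaker Runtime `invoke_endpoint` call is a standard boto3 API.

    # Illustrative sketch of real-time scoring: the Tecton workspace, feature
    # service, and SageMaker endpoint names below are placeholder assumptions.
    import boto3
    import tecton

    # Fetch precomputed features for this user from Tecton's online store.
    ws = tecton.get_workspace("prod")
    feature_service = ws.get_feature_service("fraud_detection_feature_service")
    features = feature_service.get_online_features(
        join_keys={"user_id": "user_123"}
    ).to_dict()

    # Send the feature vector to a deployed SageMaker endpoint for scoring.
    runtime = boto3.client("sagemaker-runtime")
    payload = ",".join(str(v) for v in features.values())
    response = runtime.invoke_endpoint(
        EndpointName="fraud-detection-endpoint",  # assumed endpoint name
        ContentType="text/csv",
        Body=payload,
    )
    score = response["Body"].read().decode("utf-8")
    print(f"Fraud score: {score}")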

This will be a valuable discussion for ML engineers looking to reduce the complexity and time investment typically required to productionize ML systems at scale. Leave with practical insights on building ML applications that meet demanding performance requirements – without long development cycles slowing you down.

Watch On-demand