Description
The Salesforce Engineering team is looking for a Senior Data Engineer (SMTS) with experience in distributed systems to join us.
You will work cross-functionally with engineers, architects, and product managers to build breakthrough features that our internal customers will love, adopt, and use, while ensuring stable and scalable applications. You'll be part of a modern, lean, self-governing product engineering team where you can switch hats among coding, requirements gathering, and testing for quality and performance.
The team develops intelligent, data-driven tools that enable strategic decision-making about Salesforce infrastructure expenditure and capacity management. We are building a platform that provides near real-time monitoring of infrastructure cost and capacity utilization, helping to optimize resource allocation and minimize costs. We apply advanced machine learning techniques to turn the petabytes of data generated by our global infrastructure into actionable predictions and business insights used daily by capacity planners, internal service owners, and technical leaders. As an internal tooling team, engineers are expected to interact directly with customers to develop requirements and to design, release, and maintain distributed systems with visibility throughout Salesforce.
This is a fantastic opportunity for someone who is passionate about building scalable, resilient, distributed systems that collect, process, and analyze massive volumes of operational data. The ideal candidate has strong data architecture, ETL, and SQL skills; a proven track record of working with enterprise metrics to build automated data pipelines; and deep proficiency in a Big Data tech stack such as Spark, Trino, Hive, and Airflow.
Responsibilities
Develop, automate, enhance, and maintain scalable ETL pipelines
Independently design and develop resilient, reusable data automation frameworks
Lead data pipeline development being delivered by multiple engineers
Lead and participate in requirement gathering, design, and development of complex datasets
Mentor team members in all aspects of the data maturity lifecycle
Collaborate with global teams across AMER and APAC
Required Skills/Experience
Bachelor's degree in Computer Science
5+ years of experience in data engineering, data modeling, automation, and analytics
Deep understanding of the data engineering tech stack, database design, associated tools, system components, internal processes, and architecture
Experience working as a technical lead/solution architect in a customer-focused team
Must be able to strategically communicate status and identify risks
Self-starter: highly motivated, able to shift direction quickly when priorities change, think through problems to arrive at innovative solutions, and deliver against tight deadlines
Must be results-oriented and able to move forward with minimal direction
Experience with distributed SQL analytics engines such as Spark and Trino
Deep proficiency in Big Data technologies such as Spark, Trino, Hive, Airflow, and Docker
Experience with Agile/Scrum methodologies, incremental delivery, and CI/CD
Experience with cloud platforms such as AWS, Azure, and GCP
Experience with data visualization tools such as Tableau is a plus
Experience in FinOps engineering is a big plus