Better experiences. Made possible by you.

Be yourself. Grow your own way. Work on interesting projects.

Data Engineer

Contract Type: Brick and Mortar
Location: Hyderabad - TS
Date Published: 11-14-2025
Job ID: REF36880O

Company Description:

About Sutherland

Artificial Intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise. We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence.

We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model.

For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes, and enduring relationships.

Sutherland
Unlocking digital performance. Delivering measurable results.


Job Description:

As a Data Engineer within our Enterprise Data and Analytics team, you’ll be at the forefront of a company-wide digital transformation. We’re looking for a forward-thinking engineer who brings deep technical expertise and a passion for building scalable, AI-ready data platforms. In this role, you’ll architect and implement modern data solutions that power intelligent applications, streamline operations, and deliver measurable business value. You’ll work closely with cross-functional teams to design robust pipelines, integrate diverse data sources, and enable advanced analytics and generative AI capabilities.

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines and ETL processes using Azure Data Factory, Azure Databricks, and other Azure services.
  • Implement and optimize data storage solutions using Azure Data Lake, Azure SQL Database, and Delta Lake to support analytics and reporting needs.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality, reliable datasets.
  • Ensure data security, compliance, and governance by applying best practices, role-based access control (RBAC), and encryption.
  • Monitor and troubleshoot data workflows, ensuring performance, reliability, and cost-efficiency across Azure cloud environments.
  • Automate data integration and transformation tasks using Azure Functions, Logic Apps, and scripting languages like Python.
  • Mentor junior engineers and foster a collaborative team environment.

Qualifications:

  • Bachelor’s degree in Computer Engineering, Computer Science, or a related discipline
  • 7+ years of experience in ETL design, development, and performance tuning using the Microsoft stack in a multi-dimensional data warehousing environment
  • 7+ years of advanced SQL programming expertise (PL/SQL, T-SQL)
  • 5+ years of experience in Enterprise Data & Analytics solution architecture
  • 3+ years of experience in Python programming
  • 3+ years of hands-on experience with Azure, especially for data-heavy/analytics applications leveraging relational and NoSQL databases, data warehousing, and big data solutions
  • 3+ years of experience with key Azure services: Azure Data Factory, Data Lake Storage Gen2, Analysis Services, Databricks, Blob Storage, SQL Database, Cosmos DB, App Service, Logic Apps, and Functions
  • 2+ years of experience designing data models aligned with business requirements and analytics needs
  • 2+ years of experience defining and implementing data security standards, including encryption, auditing, and monitoring
  • Strong analytical skills and intellectual curiosity

Preferred Skills:

  • Familiarity with DevOps processes (CI/CD) and infrastructure as code
  • Knowledge of Master Data Management (MDM) and Data Quality tools
  • Experience developing REST APIs using Java Spring Boot or Python
  • Experience building and supporting AI/ML software solutions
  • Familiarity with stream-processing systems (e.g., Event Hubs, Storm, Spark Streaming)
  • Experience with API integrations (RESTful, SOAP) for both internal and external systems to enhance data flow and automation
  • Experience working in Agile environments and familiarity with tools like Jira or Azure DevOps
  • Experience in data and analytics within the Life Sciences industry


Additional Information:

All your information will be kept confidential according to EEO guidelines.
