Better experiences. Made possible by you.

Be yourself. Grow your own way. Work on interesting projects.

Databricks & GCP Data Platform Architect

Contract Type: Brick and Mortar

Location: Hyderabad - TS

Date Published: 04-15-2026

Job ID: REF40279A

Company Description:

About Sutherland

Artificial Intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise. We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence.

We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model.

For each company, we provide new keys for their business, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes, and enduring relationships.

Sutherland
Unlocking digital performance. Delivering measurable results.

Job Description:

We are looking for a hands-on Databricks & GCP Data Platform Architect who will design and personally implement scalable Lakehouse solutions on Google Cloud Platform (GCP).

This role requires deep technical involvement, including building pipelines, configuring Databricks, and troubleshooting production issues, in addition to architecture ownership.

Key Responsibilities

1. Architecture & Hands-on Implementation

  • Design end-to-end Databricks Lakehouse architecture on GCP
  • Hands-on implementation of:
    • Databricks workspaces, clusters, jobs, and workflows
    • Delta Lake–based Bronze / Silver / Gold data layers
    • Batch and streaming pipelines using Spark and Databricks
  • Create reference implementations and reusable frameworks for teams
  • Actively participate in coding, reviews, and production deployments
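To give candidates a concrete sense of the hands-on expectation, below is a minimal sketch of a Bronze / Silver / Gold Delta Lake flow in PySpark. The GCS path, table names, and order schema are illustrative assumptions, not details of any actual project.

```python
# Minimal sketch of a Bronze -> Silver -> Gold Delta flow on Databricks.
# Paths, table names, and the "orders" schema are illustrative assumptions;
# the bronze/silver/gold schemas are assumed to already exist.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# Bronze: land raw files as-is, adding ingestion metadata
raw = (spark.read.format("json")
       .load("gs://example-bucket/raw/orders/")           # hypothetical GCS path
       .withColumn("_ingested_at", F.current_timestamp()))
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: deduplicate and apply basic quality rules
silver = (spark.table("bronze.orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("order_total") > 0))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate for consumption
gold = (spark.table("silver.orders")
        .groupBy("customer_id")
        .agg(F.sum("order_total").alias("lifetime_value")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_ltv")
```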

2. Data Engineering (Hands-on)

  • Build and optimize Spark jobs and Databricks notebooks
  • Implement ingestion pipelines from:
    • Databases and enterprise applications
    • Streaming sources (Pub/Sub, Kafka)
    • External and SaaS systems
  • Perform performance tuning and cost optimization
  • Troubleshoot pipeline failures and production issues directly
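As an illustration of the streaming side of the role, the sketch below reads a Kafka topic with Spark Structured Streaming and lands it in a Bronze Delta table. The broker address, topic, and checkpoint location are hypothetical.

```python
# Minimal sketch of a streaming ingestion pipeline into a Bronze Delta table.
# Broker, topic, and checkpoint path are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker.example.com:9092")  # hypothetical
          .option("subscribe", "orders-events")                          # hypothetical topic
          .load()
          .select(F.col("key").cast("string"),
                  F.col("value").cast("string"),
                  "timestamp"))

# Write continuously to Delta; the checkpoint makes the stream restartable
(events.writeStream.format("delta")
 .option("checkpointLocation", "gs://example-bucket/checkpoints/orders-events/")
 .trigger(processingTime="1 minute")
 .toTable("bronze.orders_events"))
```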

3. Security, Governance & Compliance

  • Implement (not just define) governance using Unity Catalog
  • Configure access control integrated with GCP IAM
  • Set up secure networking (VPC, private endpoints)
  • Enable audit logging, lineage, and data classification
  • Work closely with security teams to operationalize standards
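By way of example, "implementing rather than defining" governance might look like the minimal Unity Catalog sketch below, run from a Databricks notebook. The catalog, schema, and group names are illustrative assumptions, and creating a catalog requires the appropriate metastore privileges.

```python
# Minimal sketch of Unity Catalog governance applied in code.
# Catalog, schema, and group names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Unity Catalog uses a three-level namespace: catalog.schema.table
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.silver")

# Grant least-privilege access to an account-level group
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-engineers`")
spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA analytics.silver TO `data-engineers`")
```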

4. DevOps, Automation & Operations (Hands-on)

  • Build CI/CD pipelines for Databricks notebooks, jobs, and configs
  • Implement Infrastructure as Code using Terraform
  • Set up monitoring, alerting, and operational dashboards
  • Participate in production support, root-cause analysis, and fixes
  • Drive hands-on cost optimization initiatives
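For illustration, one hands-on form of "jobs as code" is sketched below using the Databricks SDK for Python, the kind of step a CI/CD pipeline could run on merge. The job name, notebook path, and cluster settings are assumptions; teams often use Terraform or Databricks Asset Bundles for the same purpose.

```python
# Minimal sketch of deploying a Databricks job from CI/CD via the
# Databricks SDK for Python. All names and settings are hypothetical;
# authentication is assumed to come from the environment (e.g. a CI secret).
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

w = WorkspaceClient()  # reads host/token from env vars or a CLI profile

created = w.jobs.create(
    name="nightly-silver-refresh",  # hypothetical job name
    tasks=[
        jobs.Task(
            task_key="refresh-silver",
            notebook_task=jobs.NotebookTask(
                notebook_path="/Repos/platform/silver_refresh"  # hypothetical path
            ),
            new_cluster=compute.ClusterSpec(
                spark_version="15.4.x-scala2.12",
                node_type_id="n2-standard-4",  # GCP node type
                num_workers=2,
            ),
        )
    ],
)
print(f"Created job {created.job_id}")
```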

5. Stakeholder Collaboration

  • Translate business requirements into implemented solutions
  • Guide and mentor data engineers through code-level support
  • Conduct architecture and code reviews
  • Act as a technical owner from design through production

Required Skills & Experience

Must Have

  • Strong hands-on experience with Databricks (Apache Spark)
  • Proven experience building and deploying Lakehouse architectures
  • Hands-on experience with GCP, including:
    • Google Cloud Storage (GCS)
    • BigQuery
    • Pub/Sub
    • IAM & VPC basics
  • Experience implementing batch and streaming pipelines
  • Strong troubleshooting and production support skills

Good to Have

  • Unity Catalog, Delta Live Tables
  • CI/CD, Git, Terraform
  • MLflow, Vertex AI exposure
  • Multi-cloud Databricks experience (Azure / AWS)

Qualifications:

  • 8–12 years of experience in data engineering / data platforms
  • 3+ years in a hands-on architect or senior technical lead role

Additional Information:

All your information will be kept confidential according to EEO guidelines.
