Data Architect and Data Engineer openings (3-11 years of experience) | Bangalore | Hybrid working
Data Engineer - JD
We are looking for:
- 3 to 5 years of relevant hands-on data engineering experience in data ingestion, processing, and exploratory analysis, building solutions that deliver value from data as an asset.
- A data engineer who can build, test, and deploy data pipelines that move data across systems efficiently and reliably, and who stays on top of the latest architectural trends on the Azure cloud.
- Folks who understand parallel and distributed processing, storage, concurrency, and fault-tolerant systems.
- Folks who thrive on new technologies and can adapt and learn quickly to meet next-generation engineering challenges.
Technical Skills (Must-Have)
- Applied experience with distributed data processing frameworks (Spark, Databricks) using Python and SQL.
- Must have delivered at least 2 end-to-end data analytics projects involving Databricks configuration, Unity Catalog, Delta Sharing, and the medallion architecture.
- Applied experience with Azure data services: ADLS, Delta Lake, Delta Live Tables, Azure Storage, and RBAC.
- Applied experience with unit testing and system integration testing using a Python testing framework.
- Applied DevOps experience designing and deploying CI/CD pipelines using Jenkins.
- Azure Data Engineering (DP-203) or Databricks certification.
- Prior working experience on a high-performing agile team using Scrum, Jira, JFrog, and Confluence.
Nice to have:
- Background in an IoT data-driven product/platform development environment is a plus.
- Medical and Healthcare domain expertise is a plus.
Data Architect - JD
- 5 to 11+ years of experience on data engineering platforms.
- Architecting expertise: data architecture, ADF, ADLS.
- Hands-on experience with the Azure cloud and PySpark.
- Expert in ETL.
- Knowledge of and hands-on experience (minimum 6 months) with data cataloguing and data governance.
- Azure cloud/Azure services.
- Medical/healthcare domain experience is nice to have.
- Excellent communication skills.
Technical Skills (Must-Have)
- Applied experience with distributed data processing frameworks (Spark and Databricks) using Python.
- Proficiency in programming languages: Python and PySpark.
- Applied experience with Azure services: ADLS, Delta Lake, Delta Live Tables, Delta Sharing, and Unity Catalog.
- Applied experience with SQL scripting and CRUD operations on time-series databases.
- Applied experience with unit testing and system integration testing using a Python testing framework.
- Applied DevOps experience designing and deploying CI/CD pipelines using Jenkins.
- Azure Data Engineering (DP-203) or Databricks certification.
Please share your profile at:
Email: hr@abfirst.co.in / info@abfirst.co.in
#DataEngineer #DataArchitect #AzureCloud #Databricks #Spark #PythonProgramming #SQL #CI_CD #DataPipelines #AzureServices #DataArchitecture #ETL #HybridWork #BangaloreJobs #TechJobs #ITCareers #DistributedProcessing #DataGovernance #UnitTesting #IoT #HealthcareIT #Jenkins #AgileDevelopment #Pyspark #AzureADLS