Job Title: Senior Software Engineer
Salary Range: $206,315/yr - $211,315/yr.
Job ID: R187188
Job Description: Partner with Product Managers, Solution Architects, and Engineering peers to define, analyze, and estimate levels of effort to deliver at-scale solutions that meet business requirements. Gain an extensive understanding of key dependencies with external and internal teams to collaborate on cross-functional initiatives. Design, develop, and operationalize scripts / services to meet business / functional requirements. Embrace and implement best-in-class DevOps practices with end-to-end “design, build and run” responsibilities, with the aim of operating a low-touch Production environment. Responsible for a small- to medium-sized functional area, or a significant component of a functional area. Develop high-quality code using standards-based solutions, and drive adoption by working with other engineers. Maximize system uptime / availability, ensure functional and performance SLAs are met, and establish end-to-end monitoring and alerting for systems. Must appear in office 3 days per week. WFH permissible 2 days per week.
Requirements: Bachelor’s degree or foreign degree equivalent in Computer Science or related field and five (5) years of progressive, post-baccalaureate experience in Software Development and Maintenance or in the job offered or a related role.
Experience and/or education must include:
Automating cluster and job management using Databricks CLI and REST API;
Developing Python scripts for ETL pipelines, data ingestion into Azure Databricks, and using Pandas and PySpark for data transformation;
Utilizing PySpark DataFrames for large-scale data processing, performing complex transformations like filtering, aggregations, and joins;
Writing and optimizing Spark SQL queries for data analysis and reporting, leveraging query syntax, built-in functions, indexing, partitioning, and data caching to enhance performance and reduce execution time;
Designing and developing tabular models in Azure Analysis Services, including relationships, hierarchies, and calculated columns for supporting complex analytical queries;
Developing Kafka producers and consumers using Kafka client APIs for real-time data ingestion and processing, ensuring efficient and reliable data flow;
Integrating Jenkins with version control systems such as Git to fetch source code and track changes, facilitating continuous integration;
Integrating Jenkins with Azure cloud services to deploy applications to cloud infrastructure and leverage cloud-native services for improved scalability and reliability; and
Implementing security best practices with Azure Security Center and enforcing organizational policies using Azure Policy.
Job Location: 4440 Rosewood Dr., Pleasanton, CA 94588. Must appear in office 3 days per week. WFH permissible 2 days per week.
TO APPLY: Please reference job ID R187188 and submit resume online at https://www.gapinc.com/en-us/careers/gap-careers