Principal Engineer, Data Analytics Engineering
- Full-time
- Job Type (exemption status): Exempt position - Please see related compensation & benefits details below
- Business Function: Data Analytics Engineering
- Work Location: Bangalore Cosmos Office
Company Description
At Western Digital, our vision is to power global innovation and push the boundaries of technology to make what was once thought impossible, possible. At our core, Western Digital is a company of problem solvers. People achieve extraordinary things given the right technology. For decades, we've been doing just that. Our technology helped people put a man on the moon.
We are a key partner to some of the largest and highest-growth organizations in the world. From energizing the most competitive gaming platforms, to enabling systems to make cities safer and cars smarter and more connected, to powering the data centers behind many of the world's biggest companies and public clouds, Western Digital is fueling a brighter, smarter future.
Binge-watch any shows, use social media, or shop online lately? You'll find Western Digital supporting the storage infrastructure behind many of these platforms. And that flash memory card that captures and preserves your most precious moments? That's us, too.
We offer an expansive portfolio of technologies, storage devices, and platforms for businesses and consumers alike. Our data-centric solutions comprise the Western Digital, G-Technology, SanDisk, and WD brands.
Today's exceptional challenges require your unique skills. It's You & Western Digital. Together, we're the next BIG thing in data.
ABOUT ADVANCED ANALYTICS OFFICE (AAO) AND BIG DATA PLATFORM (BDP)
AAO's mission is to accelerate analytics solutions at scale across the enterprise to rapidly capture business value. These solutions target key business metrics such as reducing manufacturing cost, improving capital efficiency, reducing time-to-market for new products, improving operational efficiency, and improving customer experience. They are built using cutting-edge Industry 4.0 technologies and are delivered through a platform approach to enable rapid scaling. The solutions span AI/ML for improving manufacturing yield, quality, equipment uptime, and adaptive testing; Operations Research for capacity and scheduling optimization; Digital Twin for inventory and logistics optimization; and Product Telematics for customer fleet management.
The Big Data Platform (BDP) team provides self-service data and application platforms that enable rapid scaling of services for ever-increasing business impact. You will have the opportunity to partner in making remarkable things happen across Western Digital's more than a dozen factories around the globe, global product development teams, customer solutions, and supporting functions such as Finance, Supply Chain, Procurement, and Sales.
Job Description
We are seeking a passionate candidate dedicated to building robust data pipelines and handling large-scale data processing. The ideal candidate thrives in a dynamic environment, is committed to optimizing and maintaining efficient data workflows, and has hands-on experience with Python, MariaDB, SQL, Linux, Docker, Airflow administration, and CI/CD pipeline creation and maintenance. The application is built with Python Dash, and the role involves application deployment, server administration, and keeping the application running smoothly and up to date.
Key Responsibilities:
- 9+ years of experience developing data pipelines using Spark.
- Ability to design, develop, and optimize Apache Spark applications for large-scale data processing.
- Ability to implement efficient data transformation and manipulation logic using Spark RDDs and DataFrames.
- Manage server administration tasks, including monitoring, troubleshooting, and optimizing performance. Administer and manage databases (MariaDB) to ensure data integrity and availability.
- Ability to design, implement, and maintain Apache Kafka pipelines for real-time data streaming and event-driven architectures.
- Deep development skills in Python, PySpark, Scala, and SQL/stored procedures.
- Working knowledge of Unix/Linux tools such as awk, ssh, and crontab.
- Ability to write Transact-SQL and to develop and debug stored procedures and user-defined functions in Python.
- Working experience with Postgres and/or Redshift/Snowflake databases is required.
- Exposure to CI/CD tools such as Bitbucket, Jenkins, Ansible, Docker, and Kubernetes is preferred.
- Ability to understand relational database systems and their concepts.
- Ability to handle large tables/datasets of 2+ TB in a columnar database environment.
- Ability to integrate data pipelines with Splunk/Grafana for real-time monitoring and analysis, and with Power BI for visualization.
- Ability to create and schedule Airflow jobs.
Qualifications
- Minimum of a bachelor’s degree in computer science or engineering. Master’s degree preferred.
- AWS Developer certification is preferred.
- Certification in SDLC (Software Development Life Cycle) methodology, integrated source control systems, or continuous development and continuous integration is preferred.
Additional Information
Western Digital thrives on the power of diversity and is committed to an inclusive environment where every individual can thrive through a sense of belonging, respect, and contribution. We are committed to giving every qualified applicant and employee an equal opportunity. Western Digital does not discriminate against any applicant or employee based on their protected class status and complies with all federal and state laws against discrimination, harassment, and retaliation, as well as the laws and regulations set forth in the "Equal Employment Opportunity is the Law" poster.