Good day,
We have an immediate opportunity for a Data Engineer (Python, Spark/Scala).
Job Role: Data Engineer (Python, Spark/Scala)
Job Location: Bangalore
Experience: 5 to 10 years
Notice Period: Immediate joiners only.
About the Company:
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 13,950+ and 55 offices in 20 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.
Diversity, Equity, and Inclusion
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative-action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
Key Responsibilities:
- ETL Development: Design, build, and optimize complex ETL/ELT pipelines in large enterprise environments to ensure efficient data flow and processing (see the brief sketch after this list).
- Big Data Technologies: Utilize Apache Spark (using Scala or PySpark) to process and analyze large datasets, ensuring high performance and scalability.
- Programming: Leverage Python for data manipulation, scripting, and automation tasks. Experience with Scala is highly desirable for enhancing data processing capabilities.
- Data Processing & Wrangling: Cleanse, transform, and aggregate large datasets from various sources, ensuring data quality and integrity.
- Database Skills: Demonstrate a solid understanding of and experience with SQL for querying and optimization, along with familiarity with NoSQL concepts and databases.
- Agile/Scrum Experience: Collaborate effectively within an Agile/Scrum team, engaging directly with clients or business stakeholders to gather requirements and deliver solutions.
- Communication & Collaboration: Exhibit excellent verbal and written communication skills, capable of explaining complex technical concepts to diverse audiences and working effectively in a team environment.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role, with a strong focus on ETL development and big data technologies.
- Proficiency in Apache Spark (Scala or PySpark) and Python for data manipulation and automation.
- Solid experience with SQL, including querying and optimization techniques.
- Familiarity with NoSQL databases and concepts.
- Experience working in Agile/Scrum teams, with a collaborative approach to problem-solving.
- Strong analytical skills and attention to detail.
Preferred Qualifications:
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) for data processing and storage.
- Knowledge of data warehousing concepts and tools.
- Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus.