Network Big Data Sr. Engineer _VOIS

Role purpose:

-     The Sr. Big Data Engineer is a core member of the agile teams delivering data pipelines and capabilities within the organization, building and automating data flows and providing expert guidance, delivering through self and others to:
1.    Integrate the data from multiple sources needed for analysis and for Technology actions
2.    Build applications and products that use large volumes of data and generate outputs enabling actions that create incremental value
3.    Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines and support) to speed up delivery in the Big Data Programme, assuring the quality and performance of component releases on the platform and their alignment with the Group technology blueprint
4.    Support stakeholders and functions in deriving business value from the operational data

Key accountabilities and decision ownership

•    Designing and producing high-performing, stable end-to-end applications that perform complex batch and streaming processing of massive data volumes on a multi-tenant big data platform in the cloud (on-premises Hadoop will also be considered) and output insights back to business systems according to their requirements; a minimal sketch of such a streaming job follows this list
•    Ingesting and automating the flow of the necessary data from local and Group sources onto the GCP platform
•    Accountable for the delivery of solution and use case enablement, GCP project and resource enablement, data source ingestion for Networks sources, application production rollouts, and code/execution optimisation for big data solutions
•    Working with key stakeholders such as the Group Big Data/Neuron team, ADS, ACoE, and local market IT and Big Data teams to define the strategy for evolving the Big Data capability, including solution architecture decisions aligned with the platform architecture
•    Investigating and driving the adoption of new technologies to identify where they can bring benefits
•    Ensuring common data architecture, structure and definitions, data cleansing and data integrity
•    Supporting data security & privacy processes
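As one illustration of the batch/streaming work described above, here is a minimal sketch of a PySpark Structured Streaming job that consumes events from Kafka and counts them in one-minute windows. The broker address, topic name and field names are hypothetical placeholders, and running it requires the Spark Kafka connector on the classpath; this is an assumed example, not a description of any Vodafone system.

```python
# Minimal sketch: read a Kafka stream, count events per key in
# 1-minute tumbling windows, and stream the aggregates out.
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("network-events-stream").getOrCreate()

# Hypothetical broker and topic names.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "network-events")
    .load()
)

# Kafka delivers key/value as binary; cast the key to a string.
parsed = events.select(
    F.col("key").cast("string").alias("cell_id"),
    F.col("timestamp"),
)

# Count events per cell in 1-minute tumbling windows.
counts = parsed.groupBy(F.window("timestamp", "1 minute"), "cell_id").count()

# Write to the console here; a production job would write the insights
# back to a business system or serving store instead.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```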

Core competencies, knowledge and experience

•    Bachelor's degree in Computer Science, Engineering or a related subject

•    Experience working on projects that span multi-disciplinary areas and differ in size and complexity, meeting business expectations and delivering results

•    Confident and able to liaise and influence at all levels within Vodafone and/or relevant customer organizations

•    Able to communicate effectively across organisational, technical and political boundaries, understanding the context

•    Understands business requirements; drives improvements and process innovation

Must have technical / professional qualifications:

•    Expert-level experience with the Hadoop ecosystem (Spark, Hive/Impala, HBase, YARN) and the Cloudera distribution; experience with comparable cloud provider solutions (AWS, GCP, Azure) also considered
•    Strong hands-on software development experience in Python; Scala and Java desirable
•    Excellent knowledge of UNIX/Linux administration and bash scripting
•    Expert skill level in Apache NiFi, Apache Kafka, CDAP, Apache Spark and Apache Airflow (see the orchestration sketch after this list)
•    Very good knowledge of data movement techniques and best practices for handling large volumes of data.
•    Experience with data warehousing architecture and data modelling best practices.
•    Experience with File Systems, server architectures, and distributed systems.
•    Google Cloud certified Professional Data Engineer and/or Cloud Architect
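Expert-level Airflow skill in this context typically means designing pipelines like the minimal sketch below, which chains a source ingestion step to a Spark transformation. The DAG id, commands and paths are hypothetical placeholders, shown only to indicate the kind of orchestration involved.

```python
# Minimal Airflow sketch: a daily DAG that lands a raw extract and then
# runs a Spark transformation. Names and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="network_source_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Land the raw extract onto the platform's file system.
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="hdfs dfs -put /landing/raw.csv /data/raw/",
    )

    # Run the downstream Spark job once ingestion succeeds.
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit /jobs/transform.py",
    )

    ingest >> transform
```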
 

#_VOIS#WeMoveTheWorld#MoveWithUs

Publication date: 1 August 2024
Publisher: Vodafone jobs