As VP of Data Engineering at Company, you are a hands-on individual contributor who assists with and/or leads data engineering initiatives delivering next-generation architecture, security, and enterprise data solutions. Working with cross-functional peers and business partners, you will strongly influence architecture design, the modernization of existing data pipelines, and the ongoing development and prioritization of the enterprise data strategy roadmap and its business execution.
Duties and Responsibilities:
As an individual contributor, you will be working with peers, partners, cross-functional teams and vendors to:
Develop and execute data security architecture framework across the enterprise
- Evaluate data security threats, risks and vulnerabilities to the Enterprise Data organization.
- Analyze relevant policies, standards and procedures against Enterprise Data systems (along with regulatory drivers) to ensure control mapping and drive compliance.
- Design, architect, and implement world-class physical data security, data privacy, classification, authentication, access controls, and secure configurations, and drive the inventory of assets.
- Ensure that information security policies, multi-year strategies, standards, procedures, and best practices are developed and communicated to IT leadership.
- Define and research information security standards; conduct vulnerability analyses and risk assessments; and review architecture platforms, applications, and integration issues.
- Participate in project management activities to manage IT Security programs and initiatives.
Assist with our journey to a hybrid multi-cloud next-generation data platform
- Lead the cloud data lake architecture design, integration and production deployment initiatives.
- Lead the design and implementation of secure data integration processes for both upstream and downstream data pipelines.
- Assist with building and deploying streaming and batch data pipelines across hybrid multi-cloud architecture, capable of processing and storing petabytes of data quickly and reliably.
- Assist with the re-architecture of a variety of data ingestion and transformation pipelines spanning marketing, web analytics, and consumer device metrics.
Data Engineering for Analytics and Operationalization
- Build capabilities to run machine learning algorithms at scale.
- Build and maintain dimensional data warehouses, data marts, operational data stores, semantic models and ontologies in support of business intelligence and advanced analytics tools in a hybrid multi-cloud architecture.
- Assist with graph-based metadata and master data management repositories.
- Drive and maintain a culture of quality, innovation and experimentation.
- Establish a strong collaborative culture with peers and other functions within Company, promoting success, pride, performance, discipline, innovation, and creativity.
Experience:
- 10+ years working with ETL/ELT, Data Warehousing, Data Modeling, and Big Data Architecture, including 7+ years with distributed systems and 4+ years with cloud platforms.
- 10+ years developing, maintaining, and evolving data security programs.
Requirements and General Skills:
- Good public speaking and presentation skills.
- Bias for continuous improvement and a demonstrated ability to embrace and grow from feedback.
- Able to communicate clearly and effectively, especially with less technical audiences.
- Excellent written and verbal communication skills.
- Ability to work independently and in a team environment.
- Strong attention to detail and organizational skills.
- Ability to project professionalism over the phone and in person.
- Ability to handle multiple tasks in a fast-paced environment.
- Commitment to "internal client" and customer service principles.
- Willingness to take initiative and to follow through on projects.
- Strong spelling, grammar, proofreading, and editing skills.
- Creative writing ability.
- Excellent time management skills, with the ability to prioritize and work under shifting deadlines.
- Must have legal right to work in the U.S.
- Expertise with protecting personally identifiable information (PII) while maintaining quality, transparency and necessary access to data.
- Expertise with hybrid multi-cloud data integration and orchestration.
- Experience with data processing and manipulation in Java, Python, or Scala.
- Experience with data lake architecture on distributed file systems and/or object storage.
- Experience optimizing data pipelines and data processing in a decoupled storage and compute paradigm.
- Applied production experience with data processing, warehousing, and streaming technologies such as Hadoop, Spark, Hive, Luigi, Airflow, HBase, Impala, Samza, Flink, Kafka, Spark Streaming, or managed cloud services on AWS, GCP, and/or Azure.
- Expertise in at least one SQL dialect, such as T-SQL or PL/SQL.
- Experience developing and managing data warehouses at terabyte-to-petabyte scale.
- Strong experience with massively parallel processing (MPP) and columnar databases.
- Deep understanding of advanced data warehousing concepts and track record of applying these concepts on the job.