
Big Data Architect: Enterprise Projects (Strategic Position)

Location: Sydney
Salary: $130k - $140k
Job Posted: Aug 2019

This is an exclusive strategic role; the successful candidate will work closely with executive management and senior business stakeholders. The role offers an excellent salary package along with a significant investment in the chosen candidate's long-term career.

Requirements:

  • Experience in software engineering, including big data and real-time data processing
  • 8-10 years of experience building enterprise-level software solutions
  • Minimum of 2 years of experience architecting AWS cloud-based software solutions
  • Experience in API-based development using Java or Python
  • Experience architecting and building secure, reliable, high-performance data pipelines using Spark
  • Experience architecting solutions at scale to empower the business and support a wide variety of use cases, from experimental work to mission-critical production operations
  • Experience in real-time data processing
  • Experience working with both structured and unstructured data, including complex JSON documents
  • Experience with database systems such as AWS Redshift, BigQuery, SQL Server, or Oracle
  • Experience with multi-threading and asynchronous event-driven programming
  • Experience with high-volume, high-availability distributed systems
  • Experience devising viable solutions to tough engineering problems

Responsibilities:

  • Manage the full development lifecycle of backend data ingestion/integration pipelines, including architecture, design, development, testing, and deployment
  • Explore and discover new data sources, quickly becoming familiar with available APIs or other data acquisition methods, such as web scraping, to ingest data
  • Build quick proofs of concept for new data sources to showcase data capabilities and help the analytics team identify key metrics and dimensions
  • Design, develop, and maintain data ingestion and integration pipelines from various sources, which may include contacting primary or third-party data providers to resolve questions and inconsistencies or to obtain missing data
  • Design, implement, and manage near real-time ingestion and integration pipelines
  • Analyse data to identify outliers and missing, incomplete, or invalid records; ensure the accuracy of all data from source to final deliverable by creating automated quality checks
  • Evangelise an extremely high standard of code quality, system reliability, and performance