
Job

Data Engineer

  • Location:

    City of London, London

  • Sector:

    Data Migration / ETL Developer

  • Job type:

    Permanent

  • Salary:

    £50,000 - £60,000 per annum

  • Contact:

    Dan Holden

  • Contact email:

    Dan.holden@thebridgeit.com

  • Job ref:

    1265DH_1579279305

  • Published:

    8 months ago

  • Expiry date:

    2020-02-16

Data Engineer - AWS

Salary - £50,000 - £60,000 plus benefits

Location - Central London

The Role

This is a new and exciting opportunity for a Data Engineer to work within the sport industry in Central London. To begin with you'll be taking ownership of their data warehouse migration, where they're moving from an Oracle cluster to the cloud-based Snowflake data warehouse. They need someone who can (among other things) unpick the current PL/SQL ETL processes on the Oracle system and re-implement them in Python using libraries such as pandas and SQLAlchemy. The process is managed in an AWS framework using several services (you don't need to know every service inside out, but a bit of experience using cloud services in AWS and an understanding of the principles is valuable).
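To give a flavour of the work, here is a minimal, hypothetical sketch of one such ETL step re-implemented in Python: extract from Oracle with SQLAlchemy, transform with pandas, and stage the result to S3 with boto3 ready for loading into Snowflake. The connection string, table, columns and bucket are illustrative placeholders rather than details of the actual role.

    # Hypothetical sketch only: extract from Oracle, transform with pandas, stage to S3.
    import boto3
    import pandas as pd
    from sqlalchemy import create_engine

    # Source: the existing Oracle cluster (placeholder credentials, cx_Oracle driver assumed).
    oracle = create_engine(
        "oracle+cx_oracle://etl_user:password@oracle-host:1521/?service_name=ORCL"
    )

    # Extract: replaces the SELECT portion of the old PL/SQL package (example table/columns).
    df = pd.read_sql(
        "SELECT match_id, match_date, ticket_sales FROM ticketing.sales", oracle
    )

    # Transform: an aggregation a PL/SQL loop might have done, vectorised with pandas.
    daily = (
        df.assign(match_date=pd.to_datetime(df["match_date"]).dt.date)
          .groupby("match_date", as_index=False)["ticket_sales"]
          .sum()
    )

    # Load: write a CSV to S3; Snowflake then ingests it via an external stage / COPY INTO.
    daily.to_csv("/tmp/daily_sales.csv", index=False)
    boto3.client("s3").upload_file(
        "/tmp/daily_sales.csv", "example-staging-bucket", "daily_sales/daily_sales.csv"
    )

In practice each legacy PL/SQL package would become a step like this, orchestrated by whichever AWS services the team uses.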

There is great potential to learn and grow within the company, as you'll be reporting to the Head of Data Engineering and will have exposure to a variety of the latest technologies.

Key Skills and Experience -

  • Experience of Python, with use of libraries such as pandas, SQLAlchemy, boto, etc.
  • Understanding of cloud service fundamentals, including how usage can affect costs. Preference for AWS, then GCP, but any platform is fine if the understanding of core concepts is there
  • Understanding of PL/SQL to the level required to convert it to the equivalent Python code
  • A professional understanding of coding practices, including code re-use, portability, scalability, object orientation, etc.

Desired:

  • Familiarity with data warehousing technology such as AWS Redshift, Snowflake
  • Java usage to the level required to understand, code and analyse Hadoop MapReduce projects
  • Experience of Linux and Windows scripting languages appropriate to the subject area (e.g. bash, PowerShell)
  • Understanding of modern data processing paradigms, including streaming, event-driven processing, serverless computing, algorithms such as MapReduce, and frameworks such as Hadoop, Spark and Storm
  • Problem-solving ability applicable to code debugging, database table design, high volume data retention
  • Experience with data visualisation products e.g. Tableau, QuickSight
  • Interest in or experience of Machine Learning/Artificial Intelligence techniques and application