
Data Engineer

Portofino
Full-time
On-site
Amsterdam, Netherlands

Company Description

Portofino Technologies is a start-up building high-frequency trading (HFT) grade technology for digital assets. 

Today, Portofino provides liquidity on the largest centralised and decentralised cryptocurrency exchanges and serves institutions and Web3 projects that require digital asset liquidity.

Since our establishment in 2021, we have been building market-leading HFT technology to deploy our liquidity provisioning algorithms. Our competitive advantage is our superior proprietary technology that leverages advanced machine learning and stochastic control techniques to provide our clients and partners with the best pricing in the market. 

We are backed by some of the largest VCs in the world, including Valar Ventures, Coatue, and Global Founders Capital. Our vision is to scale our technology across the full crypto infrastructure value chain.

Job Description


Scope of the role

The Data Engineer we are looking for will architect and build the data platforms that drive how we source, enrich, and store the data feeding our investment process. You will own the entire data pipeline: ingesting data from the outside world, transforming that information into actionable insights, and designing the interfaces and APIs that the rest of the team will use to monetise ideas.

This role will be based in Amsterdam, Netherlands.


Your Responsibilities

  • Build data sets, data quality metrics, and automation tools to enhance our research and system development
  • Develop solutions that enable our quantitative researchers and traders to efficiently extract insights from data, including owning ingestion (web scrapes, S3/FTP sync, sensor collection), transformation (Spark, SQL, Kafka, Python/C++/Java), and interfaces (APIs, schema design, events)
  • Build tools and automation capabilities for data pipelines that improve the efficiency, quality, and resiliency of our data platform

Qualifications


Your profile

  • Experience developing scalable pipelines and model implementations for high-volume data sets, e.g. persisting and cleaning large datasets, handling multi-regional replication and compression, and exposing data to users via an API
  • Strong statistical analysis skills
  • Demonstrated ability to troubleshoot and conduct root-cause analysis
  • User-focused: driven by delivering a usable product, rather than by the technology itself
  • Enthusiasm for working with data, especially large datasets, and a keen eye for cleanliness and correctness in data

Required technical competencies (must have)

  • Strong programming skills in Python are a must
  • Experience with common Python-based data-engineering toolkits
  • Proficiency with RDBMS, NoSQL, and distributed compute platforms such as Spark, Dask, or Hadoop
  • Experience with systems such as Apache Airflow, AWS, Jupyter, Kafka, or Snowflake
  • Experience running code in containerised environments, especially Docker and Kubernetes, is a plus

Additional Information

If interested, apply directly or contact us at [email protected]

We look forward to your application!


Disclaimer for recruitment agencies: Portofino Technologies does not accept unsolicited CVs or applications from recruiters or employment agencies in response to our career portal or our social media posts. Portofino Technologies will not agree to pay any compensation or referral fee relating to these applications, and reserves the right to hire such candidates without any financial obligation to the recruiter or agency. Any unsolicited CVs, including those submitted to hiring managers or any other Portofino employee, will be considered the property of Portofino Technologies.