Headquarters: San Francisco, CA
URL: https://www.tesorio.com/
Overview
Our mission is to build financial management technologies that enable the world’s most important companies to grow more quickly in a sustainable way that’s good for people, the planet, and business.
When companies have strong cash flow performance, they can shift from short-term acrobatics to long-term growth and innovation. These are the teams that change the world, because they are free to optimize for all of their stakeholders, including their employees, business partners, and the environment.
The Opportunity
The cash flow statement is the toughest financial statement to understand, but cash flow is fundamental to funding your own growth. We build the most intuitive and actionable tools for companies to optimize cash flow performance. Our platform analyzes billions of dollars of B2B transactions each year, users spend 70% of their workday in Tesorio, and we save finance teams thousands of hours. As a result, they can invest more confidently and anticipate their capital needs further in advance.
We’re growing quickly and working with the world’s best companies, including the largest bank in the US. We recently raised a $10MM Series A led by Madrona Venture Group and are backed by top investors including First Round Capital, Y Combinator, and Floodgate. We’re also backed by veteran finance executives, including the former CFOs of Oracle and NetSuite.
We’re now looking for a Data Engineer or Senior Backend Software Engineer who can lead the charge in developing the systems that will support large-scale ML deployments. The project you are joining is fast-paced and for a large bank, so you must be experienced: you will not have time to simultaneously onboard, gather business context, and deliver on the tight timeline. To give you a sense of the project, imagine that you have cutting-edge machine learning models, but you now have to deploy them behind a bank’s four walls on a system that could be used by over 30,000 companies simultaneously, against a database with billions of records.
The ideal candidate for this role is NOT someone who can build a great model; rather, you are good at building the systems around the model, packaging them, and deploying them robustly. You must have 6+ years of experience as an engineer, including 2+ years writing REST APIs in Python, be strong at SQL, have used Docker to package your code, and have built ETL pipelines on systems like Argo or Airflow.
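To make that concrete, here is a minimal sketch of the pattern the role describes: a trained model served behind a Python REST API, ready to be packaged with Docker. Everything in it is illustrative, not Tesorio’s actual stack; the Flask framework, the /predict route, and the stub score function are hypothetical choices for the sketch.

```python
# Minimal, hypothetical sketch: a model scorer exposed as a REST endpoint.
# Not Tesorio's actual code; framework and names are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

def score(features: dict) -> float:
    # In a real deployment the model would be loaded from an artifact store;
    # a constant stands in for a trained scorer here.
    return 0.5

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    return jsonify({"score": score(payload)})

if __name__ == "__main__":
    # Binding to 0.0.0.0 makes the service reachable from inside a container.
    app.run(host="0.0.0.0", port=8080)
```

A service like this is what would get containerized with Docker and deployed on-prem behind the bank’s four walls.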
Our team is based in the San Francisco Bay Area, and we have a diverse, distributed workforce in five countries across the Americas. We don’t believe people should have to sacrifice being close to their families, or living where they prefer, in order to do their best work.
Responsibilities
- Extract data from 3rd-party databases and transform it into usable outputs for the Product and Data Science teams
- Work with Software Engineers and Machine Learning Engineers to call out risks and performance bottlenecks
- Ensure data pipelines are robust, fast, secure, and scalable
- Use the right tool for the job to make data available, whether that is in the database or in code
- Own data quality and pipeline uptime. Plan for failure (see the sketch after this list)
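As a sketch of what “plan for failure” can look like in Airflow, one of the workflow tools this posting names: retries, retry delays, and execution timeouts on an ETL task. The DAG id, schedule, and task body below are hypothetical.

```python
# Hypothetical sketch of a failure-tolerant ETL task in Airflow 2.x.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_transform():
    # Pull rows from a 3rd-party source and write usable outputs
    # for downstream teams (stubbed here).
    pass

with DAG(
    dag_id="third_party_etl",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(
        task_id="extract_and_transform",
        python_callable=extract_and_transform,
        retries=3,                           # plan for failure:
        retry_delay=timedelta(minutes=5),    # back off, then retry,
        execution_timeout=timedelta(hours=1) # and never hang forever
    )
```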
Skills
- Experience scaling, securing, snapshotting, and performance-tuning relational and document data stores, including schema optimization
- Experience building ETL pipelines using workflow management tools like Argo, Airflow, or Kubeflow on Kubernetes
- Experience implementing data-layer APIs using ORMs such as SQLAlchemy, and managing schema changes with tools like Alembic (a minimal sketch follows this list)
- Fluency in Python and experience containerizing code for deployment
- Experience following security best practices such as encryption at rest and in flight, data governance, and data cataloging
- Understanding of the importance of picking the right data store for the job (columnar, logging, OLAP, OLTP, etc.)
- Nice to have: Exposure to machine learning
- Nice to have: Experience with on-prem deployments
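To illustrate the data-layer point above, here is a minimal SQLAlchemy sketch. The Invoice model, its columns, and the open_invoices accessor are hypothetical, chosen only to fit the posting’s domain; in production, Alembic migrations would manage schema changes rather than create_all.

```python
# Hypothetical sketch of a small data-layer API in SQLAlchemy 1.4+ style.
from sqlalchemy import Column, DateTime, Integer, Numeric, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Invoice(Base):
    __tablename__ = "invoices"          # hypothetical table
    id = Column(Integer, primary_key=True)
    company_id = Column(Integer, index=True, nullable=False)
    amount = Column(Numeric(12, 2), nullable=False)
    due_date = Column(DateTime, nullable=False)
    status = Column(String(20), default="open")

def open_invoices(session: Session, company_id: int):
    """Data-layer accessor a Product or Data Science caller could use."""
    return (
        session.query(Invoice)
        .filter(Invoice.company_id == company_id, Invoice.status == "open")
        .order_by(Invoice.due_date)
        .all()
    )

if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:")
    # For the sketch only; Alembic would own schema changes in production.
    Base.metadata.create_all(engine)
```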
To apply: https://jobs.lever.co/tesorio/dd9a46b8-445b-4be4-ac6a-797e0dc3841d
Source: https://weworkremotely.com/remote-jobs/tesorio-data-engineer-etl-focused-docker-req-d