As the leader in Data Creation, Snowplow empowers more than 10,000 organizations, including Strava, Autotrader, and Flickr, to purposefully create behavioral data and unlock transformative AI and advanced analytics directly from their warehouse, lake, or real-time stream.
Snowplow was founded on the belief that data teams should spend their time innovating, not extracting and wrangling behavioral data from CDPs or analytics platforms.
Following our $40 million Series B funding round led by the global venture capital firm NEA, whose prior investments include Databricks, Cloudflare, and DataRobot, we are on the lookout for more creative and innovative individuals to help us shape our next chapter.
At Snowplow, we have been on a mission to democratize the ability to deploy and manage cloud infrastructure at scale. Internally, the tooling we have built is what lets us run our Private SaaS distribution model. However, we are still missing a piece of the puzzle: bringing this functionality further up the chain to empower other teams at Snowplow to manage "Snowplow-shaped" resources, and eventually to let customers manage them themselves.
We’re looking for an experienced Software Engineer to join our Infrastructure Services team and build the API and workflow layer on top of our infrastructure-management systems, enabling reliable, secure self-service deployment and management of Snowplow infrastructure resources.
We see this as a key building block for removing human-in-the-loop toil from the team and enabling the development of custom data pipeline topologies.
You will work closely with the SRE team and the systems they manage to build out this interface layer, and with dependent teams to surface the functionality they need for the products they are building.
You will be joining a wider team of 8+ remote SREs who work closely with our product, support and customer teams. There is a huge opportunity to learn more about all aspects of infrastructure, engineering and data, from code to customers.
What you’ll be doing:
• Building APIs and tooling for developers to interface with infrastructure systems. You have used AWS, GCP, or Azure before; we want to build the higher-order Snowplow abstraction on top, with the same style of control.
• Building out IaC tooling around these APIs, including Terraform providers that interface with the API to make managing our new resource structures simple.
• Working with diverse technologies. You’ll get the opportunity to work across a range of platforms (AWS, GCP, and soon Azure) and languages (currently mainly Go, Python, and Bash, though the Snowplow estate is very diverse, with many opportunities), as well as best-in-class infrastructure-management tooling in the form of the HashiStack (Terraform, Nomad, Consul, and Vault).
• Empowered. Working in a productive, empowered team. Everyone says this, but we’re really doing it. Come and talk to us about how.
We’d love to hear from you if:
• Go Programming Experience. You have significant experience working with Go in multiple production use cases. Experience with other languages is also beneficial but not essential.
• You care about developer experiences. You like the idea of designing and building software which helps other developers to achieve their goals.
• Leadership. You have experience supporting the development of peers.
• You enjoy working remotely. Our remote team depends on expert collaborators to work effectively. You’ll be a great communicator and enjoy working closely with the team.
• Experience working with data stacks. Previous experience in data is a plus, but most importantly you have an interest in data and how it empowers companies to make better decisions.
• Self-motivated. You don’t wait to be told what to do. You can understand a problem, drive toward a solution, and recognise when you need support or more direction.
• Pragmatic. We can’t do everything today. You’ll be pragmatic in your approach to software delivery and balance our speed of learning with our commitment to providing a reliable and trusted service to customers.
What you get in return for being awesome:
• A competitive package, including share options
• 25 days of holiday a year (plus public holidays)
• Freedom to work from wherever suits you best
• Cycle to work scheme if UK-based
• Fantastic company Away Weeks in a different city each year
• Mental health support including therapy sessions
• Work alongside a supportive and talented team with the opportunity to work on cutting edge technology and challenging problems
• Grow and develop in a fast-moving, collaborative organisation
• MacBook and home office equipment
• Enjoy fun events organised by our Cultural Work Committee
• A conveniently located central London office for those who want to work there, or for when you come to visit
• Continuous supply of Pact coffee and healthy snacks in the office when you’re here!