Cloud Data Engineer
AquaSeca
AquaSeca is bridging the gap between machine learning and sustainability. We are an early-stage team with a novel, patented, proven approach to monitoring water consumption and the health of highly complex plumbing systems. Our focus is on commercial, industrial, and multi-unit residential applications, where we are introducing our first product. We are excited to be taking first strides in the interests of water efficiency and reliability, and look forward to a few motivated, enthusiastic people joining our foundational team. You will be joining a unique early-stage startup with commensurate risks and rewards.
This can be a remote position, with travel required for in-person meetups several times a year. We encourage all potentially qualified candidates to apply. Our company is headquartered in San Jose, CA with operations in Seattle, WA.
Job Overview
We are seeking an individual with DevOps and cloud architecture experience who can build out our technological foundation while anticipating future needs. As a company with integrated hardware/IoT and cloud services, much of our platform is proprietary and requires custom tooling. This position centers on guiding our ecosystem, with an emphasis on architecture and operations over coding and product development.
Responsibilities
- Architecting and implementing a robust data pipeline (and monitoring system) for ingesting binary serialized data (CBOR, Protobuf) to support our data science team
- Building tooling to assist our engineering team with automation, deployment, and testing while researching and defining benchmarks for security, testing, performance, and code quality
- Implementing data migration systems and maintenance plans (for both hardware and software updates)
- In collaboration with our engineering team, designing a system-health monitoring framework for both our cloud services and hardware fleet
Required Qualifications
- At least 3 years of systems architecture/design: Experience with cloud services and strong knowledge of best practices for service-oriented architecture (GCP/AWS).
- CI/CD: Experience with a modern CI/CD system (we use GitLab) and leveraging it to its full potential.
- Data: Experience with ELT/ETL and data coercion/manipulation on a modern pipeline. Modern SQL and NoSQL experience (bonus points for PostgreSQL and Firestore).
- Practical experience with cloud services portability architecture: K8s, Docker.
- Security: Knowledge of modern regulations and requirements such as GDPR.
- Strong communication skills: You’ll need to be able to assess the needs of data science, hardware engineering, back-end engineering, front-end engineering, and product development staff.
- Performance-, scalability-, and reliability-focused mindset: We are scaling quickly. You will be provisioning services, streamlining existing systems, and finding the right technology to help us continue to build an innovative product.
Additional Ideal Capabilities
- Experience with implementing Big Data tools (Hadoop/Spark/Kafka).
- Knowledge of modern data pipelines: Beam/Spark/DataFlow.
- Python/Go proficiency, particularly with systems/test automation.
Job Type: Full-time
Pay: $100,000.00 - $165,000.00 per year
Benefits:
- 401(k)
- Dental insurance
- Health insurance
- Paid time off
- Vision insurance