
Cool Job Descriptions

Data Infrastructure Engineer at Applied Intuition

We are looking for infrastructure engineers with expertise in data pipelines to join the Data & ML infra group. This role works across the entire data lifecycle (collection, ingestion, storage, querying, retrieval) and partners directly with product teams across the company to design and develop both external and internal products. Handling massive volumes of data for Applied Intuition's platform is a critical area, and we are looking for someone who can be hands-on in supporting our data products and services across the company. At Applied Intuition, we encourage all engineers to take ownership of technical and product decisions, work closely with external and internal users to collect feedback, and contribute to a thoughtful, dynamic team culture.

We're looking for someone who has:

Nice to have:

Data Infrastructure Engineer at OpenAI

https://openai.com/careers/data-infrastructure-engineer

The Applied Data Platform team designs, builds, and operates the foundational data infrastructure that enables products and teams at OpenAI.

You are comfortable with work such as scaling Kubernetes services, debugging Kafka consumer lag, diagnosing distributed KV store failures, and designing a system to retrieve image vectors with low latency.
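For flavor, here is a minimal sketch of the image-vector retrieval piece. FAISS is used only as a stand-in (the posting just says "Vector DBs"), and the embedding dimension, corpus size, and index type are all assumptions.

# Minimal sketch: nearest-neighbor lookup over image embeddings.
# FAISS, the 512-dim embeddings, and the flat index are illustrative assumptions.
import numpy as np
import faiss

dim = 512
rng = np.random.default_rng(0)
vectors = rng.random((10_000, dim), dtype="float32")  # stand-in for precomputed image embeddings
faiss.normalize_L2(vectors)          # normalize so inner product == cosine similarity

index = faiss.IndexFlatIP(dim)       # exact inner-product search; swap in IVF/HNSW for larger corpora
index.add(vectors)

query = rng.random((1, dim), dtype="float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5) # top-5 most similar image vectors
print(ids[0], scores[0])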

You are well versed in infrastructure tooling such as Terraform, have worked with Kubernetes, and have an SRE skill set.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

Some of the technologies you’ll be working with include Apache Spark, Python, Terraform, Kafka, Azure EventHub, Vector DBs.

Machine Learning Engineer at Imbue

E.g. https://jobs.lever.co/imbue/9411e2ec-502a-403f-a39a-0d44c676b6af

Example projects:
• Implement a self-supervised network using contrastive and reconstruction losses (a minimal PyTorch sketch follows this list).
• Create a library on top of PyTorch to enable efficient network architecture search.
• Open source internal tools.
• Implement networks from newly published papers.
• Work on tools for simple distributed parallel training of deep neural networks.
• Develop more realistic simulations for training our agents.
• Design automated methods and tools to prevent common issues with neural network training (e.g. overfitting, vanishing gradients, dead ReLUs).
• Create visualizations to help us deeply understand what our networks learn and why.
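To make the first example project concrete, here is a minimal sketch of a joint contrastive + reconstruction objective in PyTorch. The tiny encoder/decoder, the NT-Xent-style contrastive term, the toy noise augmentations, and the equal loss weighting are illustrative assumptions, not Imbue's actual setup.

# Minimal sketch: self-supervised training with a contrastive term plus a
# reconstruction term. All architecture and hyperparameter choices are assumptions.
import torch
import torch.nn.functional as F
from torch import nn

class TinyAutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def contrastive_loss(z1, z2, temperature=0.1):
    # NT-Xent-style: matching views across the batch are positives, the rest are negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)  # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

model = TinyAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 784)                                   # stand-in batch
view1 = x + 0.05 * torch.randn_like(x)                    # toy augmentations
view2 = x + 0.05 * torch.randn_like(x)

optimizer.zero_grad()
z1, recon1 = model(view1)
z2, recon2 = model(view2)
loss = contrastive_loss(z1, z2) + F.mse_loss(recon1, x) + F.mse_loss(recon2, x)
loss.backward()
optimizer.step()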

You are:
• Very comfortable writing Python.
• Familiar with PyTorch and training deep neural networks.
• Excited to work on open source code.
• Passionate about engineering best practices.
• Self-directed and independent.
• Excellent at getting things done.