Machine Learning Engineer

Project Overview:

We are seeking a highly skilled and versatile Machine Learning and Data Engineer to join our team. This role involves developing sophisticated image-based solutions for plant phenotyping, as well as pulling, stitching, and wrangling data from various sources for advanced analytics. The successful candidate will work on a cross-functional team, collaborating closely with crop scientists, plant pathologists, cloud engineers, and data scientists to enable access to relevant data and to deploy machine-learning models and solutions.

 

Responsibilities:

  • Develop and implement image-based solutions for quantifying various traits and features of plants.

  • Pull data from first- and third-party sources, and prepare it for analysis by filtering, tagging, joining, parsing, and normalizing datasets.

  • Collaborate with team members to research data sources, design and deploy data loaders, and ensure the availability of data for analytical activities.

  • Provide technical expertise in the design and implementation of data management and architecture solutions.

  • Develop, evolve, and implement DataOps practices in partnership with cloud and ML engineers.

  • Architect, develop, deliver, and support ML products and solutions, ensuring successful deployment into production.

  • Automate model training, testing, and deployment processes.

  • Design, develop, and maintain scalable data pipelines and ETL processes.

 

Requirements:

  • Master’s degree or higher in Data Science, Computer Science, Computer Engineering, Electrical Engineering, Robotics, Physics, Mathematics, or related fields.

  • Deep understanding of and experience with image processing theory, deep learning, and machine learning; strong Python programming skills with TensorFlow or PyTorch.

  • Strong experience in designing and implementing large-scale data processing, analysis, and exploration solutions.

  • Proficiency in R and Python, advanced SQL skills, and a good understanding of data management principles.

  • Experience with cloud technologies (Azure or GCP preferred) and deploying CI/CD pipelines.

  • Knowledge of Git, PySpark, Databricks, and database management (SQL/PL/SQL/Snowflake preferred).

  • Excellent communication skills and the ability to work independently and collaborate across teams.

  • 2+ years of industry or academic experience in image processing, computer vision, or related fields.

 

Work Details:

  • Location: Remote; candidates must work in the PST time zone.

  • Work Hours: 40 hours per week.

  • We sponsor US work visas after 1 year of service.

Let’s Work Together