AI Workflow-Optimization Company
Duration
4 months; hiring year-round
The Opportunity
Translate business requirements into software, data models, and data warehousing layers on AWS to support APIs, visualization tools, and data analytics.
Implement ETL pipelines to extract data from and load data into databases; automate manual processes.
Evaluate and participate in design sprints for database and data warehouse solutions as the organization grows.
Support highly performant APIs, visualization tools, and data analytics while ensuring alignment with Jira guidelines.
Provide technical mentorship and cross-training to peers and team members; collaborate with multidisciplinary teams to ensure data acquisition, integrity, availability, and security.
Contribute to the development and implementation of Open Contractor Inc.’s Enterprise Software Management strategy; understand complex business problems and develop reliable solutions.
Leverage experience with data modeling, ontologies, and strong technical communication skills in a collaborative and dynamic environment.
Who You Are
A completed Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, Computational Linguistics, Engineering, or a related field is preferred. Publications or collaborative work posted in software forums, open-source projects, or machine-learning research are helpful.
You have experience contributing to GitHub or open-source initiatives, working on research projects, and/or participating in Kaggle competitions.
You have 1+ years of professional hands-on experience leveraging large structured and unstructured datasets to develop data-driven tactical and strategic analytics and insights using ML, NLP, and computer vision solutions.
Nice to Have
You have 1+ years of hands-on experience in front-end chatbot design, backend system design, or natural language processing (NLP) modeling, ideally with transformer architectures.
You have 1+ years of experience with implementing information search and retrieval at scale, using a range of solutions from keyword search to semantic search using embeddings.
You have practical experience with generative models, large language models (LLM), transformer neural networks, search, recommendation systems, and graph neural networks.
You have experience with pipeline and programming tools such as Nextflow, Conda, Java, or Python for developing data pipelines. Demonstrated hands-on experience with Python and machine-learning frameworks such as Hugging Face, TensorFlow, Keras, or PyTorch is required.
You bring operationalization experience in data science projects (MLOps) using popular frameworks or platforms such as Kubeflow, AWS SageMaker, Google AI Platform, or H2O.
You have advanced experience in working with graph or relational databases, query authoring, and delivering data models and pipelines using cloud services and architectures.
You have experience with cloud computing platforms (e.g., AWS, GCP), containerization technologies (e.g., Docker), and data engineering pipelines (e.g., ETL).
You have experience developing and deploying machine learning solutions using large-scale datasets, including specification design, data collection and labeling, model development, validation, deployment, and ongoing monitoring.
You possess the ability to balance mission-criticality with delivering high-quality, practical solutions, and have demonstrated perseverance in overcoming significant challenges.
Location
Hybrid; Greater Toronto Area (GTA), Ontario, Canada.
Equipment
A company computer will be provided if needed.