Gen AI Engineer


Job Requirements:

Strong Python skills, including libraries such as LangChain, LlamaIndex, PyTorch, the SageMaker SDK, and psycopg2
Experience with Docker and AWS ECR
Strong AWS experience with the following services: Bedrock, SageMaker, IAM, Glue, S3, Lambda, CodeCommit, and CodePipeline
Experience creating quick apps using Streamlit, Node.js, or another app framework (a minimal Streamlit sketch follows this list)
Education and/or experience in developing NLP models for text classification, completion, summarization, and generation
Experience using embedding models to create vector embeddings and working with vector databases (see the embedding-and-retrieval sketch after this list)
Understanding of RAG architecture, retrieval optimization, and tradeoffs of splitting methods
Familiarity with benchmarks for model evaluation and methods of determining vector similarity
Experience with scaling ML training workloads using distributed training techniques on GPUs and/or developing microservices for AI/ML/GenAI products
Data Preprocessing and Analysis: Work with large-scale datasets, preprocess the data, and perform in-depth analysis to derive meaningful insights, patterns, and trends for AI model training.
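
Several of the items above come down to one loop: split text into chunks, create vector embeddings, and rank chunks by similarity. A minimal sketch of that loop, assuming Amazon Titan embeddings called through the Bedrock runtime API; the model ID, chunk sizes, source file, and in-memory index are illustrative placeholders rather than requirements of the role:

```python
# A rough sketch of chunking, embedding, and similarity-based retrieval.
# Assumptions: AWS credentials are configured, the Titan embedding model is
# enabled in Bedrock, and internal_docs.txt is a placeholder corpus.
import json

import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def embed(text: str) -> np.ndarray:
    """Create a vector embedding for `text` with a Titan embedding model."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",  # assumed embedding model ID
        body=json.dumps({"inputText": text}),
    )
    payload = json.loads(response["body"].read())
    return np.array(payload["embedding"])


def split(document: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size character splitting; real pipelines weigh recursive or
    token-aware splitters against retrieval quality (the splitting tradeoffs)."""
    step = chunk_size - overlap
    return [document[i:i + chunk_size] for i in range(0, len(document), step)]


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """One common method of determining vector similarity."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Tiny in-memory "vector store"; a real system would use OpenSearch, pgvector, etc.
chunks = split(open("internal_docs.txt").read())
index = [(chunk, embed(chunk)) for chunk in chunks]

query_vector = embed("How do I rotate IAM access keys?")
best_chunk, _ = max(index, key=lambda item: cosine_similarity(item[1], query_vector))
print(best_chunk)
```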

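For the quick-app item, a minimal Streamlit sketch; answer_question is a hypothetical stand-in for whatever retrieval or generation chain would sit behind the UI:

```python
# Run with: streamlit run app.py
import streamlit as st


def answer_question(question: str) -> str:
    # Placeholder: swap in the real RAG chain or model call here.
    return f"(stub answer for: {question})"


st.title("Internal document Q&A")
question = st.text_input("Ask a question about the docs")
if question:
    st.write(answer_question(question))
```
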
Required skills:

Real-world experience fine-tuning models, methods of fine-tuning, and data preprocessing for fine-tuning.
AWS Solutions Architect and/or AWS Machine Learning Specialty certifications.
Python, vector databases, and large language models such as GPT-NeoX; SageMaker Pipelines, Amazon SageMaker, Amazon SageMaker JumpStart, and Amazon Bedrock for building, training, and deploying LLMs and generative AI applications; AWS Glue for data transformation and preparation; AWS Lambda for serverless computing; RAG, LangChain, Streamlit, Bedrock models, and monitoring and observability.
Use Amazon Bedrock to build generative AI-enabled applications; integrate Bedrock with vector databases using RAG (retrieval-augmented generation) and services such as OpenSearch.
Set up LangChain pipelines, including prompt engineering and output parsing (see the pipeline sketch after this list).
Develop custom pre-trained LLMs based on internal data.
Build data/ETL pipelines for training and fine-tuning LLMs.
Perform supervised fine-tuning and instruction-tuning of open-source generative AI models such as BERT, Bedrock models, and GPT for a variety of downstream tasks such as question answering.
Set up inference endpoints and MLOps workflows for iterative model performance tuning (see the endpoint sketch after this list).
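
A minimal sketch of such a LangChain pipeline, assuming the langchain-aws integration package and a Bedrock-hosted Claude model; the model ID, prompt wording, and inline context are illustrative placeholders:

```python
# Prompt template -> Bedrock chat model -> string parser, composed with LCEL.
from langchain_aws import ChatBedrock
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")  # assumed model ID

# Prompt engineering: constrain the answer to retrieved context, which is
# where the RAG retrieval step would plug in.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer using only the provided context.\n\nContext:\n{context}"),
        ("human", "{question}"),
    ]
)

# Output parsing: reduce the chat response to plain text.
chain = prompt | llm | StrOutputParser()

answer = chain.invoke(
    {
        "context": "IAM access keys should be rotated every 90 days.",
        "question": "How often should access keys be rotated?",
    }
)
print(answer)
```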

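For the endpoint item, a minimal sketch using the SageMaker Python SDK's Hugging Face container; the model artifact path, IAM role, framework versions, instance type, and endpoint name are placeholder assumptions:

```python
# Deploy a fine-tuned model artifact to a real-time SageMaker endpoint; an
# MLOps workflow would rerun this step after each tuning iteration.
from sagemaker.huggingface import HuggingFaceModel

model = HuggingFaceModel(
    model_data="s3://my-bucket/fine-tuned-model/model.tar.gz",  # assumed artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # assumed role
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
    endpoint_name="genai-llm-endpoint",
)

# Simple smoke test against the live endpoint.
print(predictor.predict({"inputs": "Summarize: SageMaker hosts the tuned model."}))
```
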
Apply Online
