Nisarg Patel Resume


Nisarg Patel

Phone: +1 602-516-3792 | Email: [email protected] | LinkedIn: linkedin.com/in/nisarg-p-patel/ | Google Scholar: 245+ citations

Innovative Data Scientist/Machine Learning Engineer with 3+ years in AI/NLP research, specializing in LLM evaluation, dataset development,
and logical reasoning enhancement. Proven expertise in boosting AI model performance and advancing NLP research in healthcare and finance.
Published researcher with a Master's degree, with contributions to deep learning frameworks and cloud technologies published in ACL, IEEE, and MDPI venues.

EDUCATION:

Master of Science in Computer Science (GPA: 4.0/4.0) – Arizona State University | Tempe, Arizona Aug 2022 - May 2024
Bachelor of Technology in Computer Science (GPA: 8.12/10.00) – Nirma University | Ahmedabad, India Jul 2018 - May 2022

TECHNICAL SKILLS:

• Programming and Data Eng.: Python, C, SQL, Java, C++, Apache Spark, ETL Processes, MySQL, PostgreSQL, MongoDB
• Frameworks/Libraries: TensorFlow, Keras, PyTorch, Hugging Face, NLTK, spaCy, Scikit-Learn, OpenCV, Matplotlib, NumPy, Pandas
• Data Tools: Seaborn, Data Analysis, Data Visualization, Reporting, Statistical Analysis, FAISS, Data Wrangling, Data Integration
• Tools: Git, Postman, MongoDB Atlas, DynamoDB, Docker, Kubernetes, AWS, GCP, Streamlit, FastAPI, LangChain, Hadoop, CI/CD
• Concepts: LLMs, Prompting, Attention Mechanisms, Deep Learning, Machine Learning, Data Science, Artificial Intelligence, Big Data
Analysis, Software Development, Design Patterns, Code Reviews, A/B Testing, Feature Engineering, Parameter Tuning, Vector DBs

PROFESSIONAL EXPERIENCE:

Cogint Lab (Arizona State University), Tempe, USA | Research Assistant Aug 2022 – Present
• Spearheaded the development of LogicBench, a natural language question-answering dataset designed to evaluate and enhance the
logical reasoning proficiency of LLMs.
• Validated enhanced logical reasoning proficiency with an average 28% performance improvement across LogicNLI, FOLIO, LogiQA,
and ReClor datasets, affirming the effectiveness of LLMs trained on LogicBench.
• Conducted detailed evaluations of LLMs on 25 diverse reasoning patterns spanning propositional, first-order, and non-monotonic (NM) logic.
• Evaluated GPT-family models, Gemini, and Llama-2, revealing their difficulties on complex reasoning tasks (average accuracy of 55%).
• Proposed and developed Multi-LogiEval, a comprehensive evaluation dataset encompassing multi-step logical reasoning with over 30
inference rules and more than 60 combinations at varying depths, covering propositional, first-order, and non-monotonic logic.
• Utilized a range of LLMs, including GPT-4, ChatGPT, Gemini, Llama-2, Gemini-Pro, Yi, Orca, and Mistral, applying chain-of-thought,
few-shot, zero-shot, self-discover, and step-back prompting techniques to analyze their logical reasoning performance.
• Identified significant performance drops in large language models as reasoning steps and depth increased, with average accuracy
decreasing from approximately 68% at depth-1 to 43% at depth-5. Performed in-depth analysis of LLM-generated reasoning chains,
uncovering critical limitations in their logical reasoning capabilities and exposing gaps in recent benchmarks for NM logic.

Samsung Research and Development Institute, Noida, India | Research & Development Intern Jan 2022 – Jun 2022
• Built a software engineering module for failure call log analysis capable of processing one million logs in parallel in under a second.
• Automated the analysis process, decreasing human intervention by 68% and using the resulting insights to improve call quality.
• Developed an ML system to categorize possible call failures based on predefined threshold domains, reducing call failures by 25%.

Sudeep Tanwar’s Research Lab, Ahmedabad, India | Research Assistant Aug 2020 – Aug 2022
• Led the development of DL-GuesS, a hybrid GRU–LSTM framework for cryptocurrency price prediction that incorporated
interdependencies among various cryptocurrencies and market sentiment to enhance predictive accuracy for Litecoin.
• Integrated price history and social media sentiment from platforms such as Twitter to improve the accuracy of cryptocurrency price
predictions, addressing the volatile, stochastic nature of prices and emphasizing reliable forecasting.
• Conducted extensive model validation using various loss functions, ensuring reliability and achieving an average validation accuracy of 85%.
• Implemented gradient encryption in federated learning (FL) to protect user privacy in autonomous vehicle (AV) learning ecosystems,
reducing data transfer by nearly three times compared to traditional FL methods. Built a CNN-based sign recognition system with
GeFL, achieving 98% accuracy, 2% higher than conventional FL-based systems, within a secure and optimized framework.

PROJECTS:

Interactive PDF Chatbot for Extractive QA

• Developed an interactive PDF chatbot using Streamlit, integrating advanced RAG techniques such as the Conversational
Retrieval Chain and session management with Conversation Buffer Memory. Leveraged the Mistral 7B LLM and a FAISS vector store
for efficient document processing and retrieval, enhancing the accuracy and relevance of responses.
• Optimized document processing by employing PyPDF Loader and the Recursive Character Text Splitter to extract and segment PDF
content effectively. Implemented FAISS for robust document storage and retrieval, ensuring precise answers grounded in the documents.

Cloud-Based Image Recognition System

• Developed a deep learning-based image recognition system on AWS, capable of processing up to 100 images per minute, resulting
in a 75% reduction in processing time compared to previous systems. Utilized auto-scaling to manage varying workloads efficiently.
• Implemented Amazon S3 for secure and scalable storage, enabling data transfer rates of 1 GB per second while handling 20 queries per second.

Publications: Contributed to significant innovations in NLP and AI, with research publications cited 245+ times according to Google Scholar.
