This project implements a unified user interface for AFM defect classification and an LLM-based AFM assistant.
- Defect Classification: utilizes a fine-tuned VGG16 model to classify AFM images into one of four categories: `good_images`, `Imaging Artifact`, `Not Tracking`, `Tip Contamination`.
- LLM-based AFM Assistant: provides a multi-turn conversation interface for users to ask questions about AFM images and defects.
  - Supports OpenAI and Anthropic LLMs
  - Allows users to ask follow-up questions and get recommendations for corrective action
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd <repository-name>
  ```

- Create and activate a virtual environment (recommended):

  ```bash
  python -m venv afm_llm
  source afm_llm/bin/activate   # On Unix/macOS
  # or
  .\afm_llm\Scripts\activate    # On Windows
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Create a `.env` file in the project root and add your API keys:

  ```
  OPENAI_API_KEY=<your_openai_api_key>
  ANTHROPIC_API_KEY=<your_anthropic_api_key>
  ```

- Run the app:

  ```bash
  streamlit run app.py
  ```

- Generate 50 detailed questions related to AFM image defects, focusing on multiple sample types, scanning parameters, and defect types.
- Generate answers using GPT-4o and Claude 3.5 Sonnet.
- Generate answers using the Gemini 2.0 Flash, Claude 3.7 Sonnet, and o3-mini models.
- Set up Label Studio for evaluation of the answers.
- Get evaluation scores from AFM experts for the answers.
- Work on the UI for the AFM conversational chatbot.
- Continue improving the UI.
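At its core, the multi-turn assistant amounts to maintaining a message history and dispatching it to the chosen provider. A minimal, provider-agnostic sketch follows; the `echo_provider` stub and all names here are illustrative stand-ins for a real OpenAI or Anthropic SDK call, which is omitted:

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Multi-turn chat history in the role/content message format
    that both the OpenAI and Anthropic chat APIs accept."""
    system: str = "You are an AFM imaging assistant."
    messages: list = field(default_factory=list)

    def ask(self, question: str, send) -> str:
        # Append the user turn, call the provider, and keep the reply
        # in the history so follow-up questions retain context.
        self.messages.append({"role": "user", "content": question})
        reply = send(self.system, self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Stub provider for illustration; a real implementation would call e.g.
# the OpenAI or Anthropic Python SDK with the system prompt and messages.
def echo_provider(system, messages):
    return f"({len(messages)} turns so far) Try re-approaching the tip."

chat = Conversation()
chat.ask("Why does my image show streaky lines?", echo_provider)
chat.ask("What corrective action do you recommend?", echo_provider)
```

Keeping the history as a plain list of role/content dicts is what makes follow-up questions work: each new call sends the full conversation so far, so the model can ground its recommendations in earlier turns.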
