AI-Driven Conversational Agent with Data Visualization

Role: UX Lead / Conversational Interface Designer

Context

In this forward-looking initiative, the goal was to build an advanced AI-powered system using ChatGPT and complementary AI technologies to bridge the gap between complex technical manuals, unstructured datasets, and everyday user inquiries.

The result: a powerful chatbot that could not only answer questions contextually but also visualize insights using map-based and graphical dashboards — all accessible via natural voice conversation.

Challenge

Organizations often sit on massive volumes of documentation — PDFs, product manuals, internal knowledge bases — that are poorly indexed, fragmented, and difficult for non-technical users to access or interpret.

The challenge was multifaceted:

Transform unstructured data into vectorized form that a machine learning model could “understand”

Design an intuitive chat experience that could serve a wide range of users, from technical specialists to the general public

Visualize high-volume data dynamically in dashboards

Enable voice interactions for hands-free, guided decision-making
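The first challenge above — turning documents into vectors a model can compare — can be sketched in miniature. This is an illustrative toy, not the production pipeline: it uses a bag-of-words "embedding" and cosine similarity in plain Python, where the real system used learned embeddings (OpenAI) and a vector database (Pinecone/Weaviate). All names and sample chunks here are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding": token counts stand in for a
    # learned embedding model, purely to show the retrieval shape.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank document chunks by similarity to the query; in production
    # this lookup is what the vector DB performs at scale.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical manual excerpts:
manual_chunks = [
    "To reset the pump, hold the power button for five seconds.",
    "Warranty claims must be filed within 30 days of purchase.",
]
top = retrieve("how do I reset the pump", manual_chunks)
```

The retrieved chunk is then handed to the language model as context, which is what lets the chatbot answer "contextually" rather than from its training data alone.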

Goals

Build an AI chatbot capable of intelligent, human-like conversations

Extract knowledge from structured/unstructured data using vectorization

Create a seamless experience between chat and data dashboards (e.g. maps, charts)

Enable voice-to-action interaction for task completion

My Process

AI + UX Discovery

Collaborated with AI engineers to define how data would be ingested and transformed into vectorized knowledge bases

Conducted stakeholder workshops to understand common user intents, tasks, and pain points across different personas
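The ingestion step defined with the AI engineers typically splits long documents into overlapping chunks before embedding, so that context spanning a chunk boundary is not lost. A minimal sketch, with illustrative window sizes (the real parameters were tuned per corpus):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    # Split a document into overlapping word windows of `size` words,
    # sliding forward by (size - overlap) words each step.
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

# A 500-word document yields three chunks (200, 200, 180 words),
# with each pair of neighbours sharing a 40-word overlap.
chunks = chunk_text(" ".join(f"word{i}" for i in range(500)))
```

Each chunk is then embedded and stored in the vector database alongside its source-document metadata, so answers can cite where they came from.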

Conversational UX & System Architecture

Designed natural, human-like flows for different types of users (novice, expert, support staff)

Created conversation maps, intent models, and fallback scenarios for handling ambiguity

Integrated tools like Whisper (for voice input), OpenAI APIs, and vector databases (e.g. Pinecone, Weaviate)
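The intent models and fallback scenarios above follow a common pattern: score an utterance against example phrases per intent, and route to a clarifying fallback when no score clears a confidence threshold. A minimal sketch — the intent names, examples, and threshold are hypothetical, and the production system used a learned classifier rather than word overlap:

```python
# Hypothetical intent inventory with example utterances per intent.
INTENTS = {
    "show_map": ["show me the map", "where is the nearest site"],
    "reset_device": ["how do I reset the device", "restart the unit"],
}

def score(utterance: str, example: str) -> float:
    # Jaccard word overlap, standing in for a real intent classifier.
    u, e = set(utterance.lower().split()), set(example.lower().split())
    return len(u & e) / len(u | e) if u | e else 0.0

def classify(utterance: str, threshold: float = 0.3) -> str:
    # Pick the best-scoring intent; fall back when confidence is low,
    # which is where the ambiguity-handling flows take over.
    best_intent, best = "fallback", 0.0
    for intent, examples in INTENTS.items():
        for ex in examples:
            s = score(utterance, ex)
            if s > best:
                best_intent, best = intent, s
    return best_intent if best >= threshold else "fallback"
```

The "fallback" branch is what the conversation maps design for explicitly: rather than guessing, the bot asks a clarifying question.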

Prototype, Test & Iterate

Built Figma-based wireframes and interactive prototypes for both the chat interface and dashboard components

Prototyped dashboard layouts for geospatial (map-based) and statistical (graph-based) visualizations

Ran internal usability tests for:

Voice command accuracy

Clarity of chatbot responses

Ease of navigating between conversational and visual modes

Multimodal Integration

Designed seamless transitions between:

Conversational chat

Data dashboards (line/bar/pie/heatmaps)

Voice-activated task completion
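The transitions between those three modes were driven by routing each resolved intent to the surface best suited to render it. A minimal sketch of that routing decision — the intent names and view identifiers are illustrative, not the production schema:

```python
def route_response(intent: str) -> str:
    # Decide which surface renders the answer to a resolved intent.
    geo_intents = {"show_map", "find_nearest"}      # geospatial queries
    stats_intents = {"show_trend", "compare_metrics"}  # numeric queries
    if intent in geo_intents:
        return "map_dashboard"    # e.g. a Mapbox-backed map view
    if intent in stats_intents:
        return "chart_dashboard"  # e.g. a Chart.js line/bar view
    return "chat"                 # plain conversational reply
```

Keeping this decision in one place made the chat-to-dashboard handoff predictable to test, whether the intent arrived by text or by voice.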

Worked closely with the engineering team to QA the voice UX and test for misfired commands, context awareness, and fallback paths

Impact

Created a centralized AI knowledge agent trained on internal documents, manuals, and processes

Users could query complex technical data via natural conversation — eliminating the need to search static documentation

Delivered dynamic visual dashboards that responded to voice/chat inputs

Reduced support overhead by offloading repetitive inquiries to the chatbot

Enabled voice-to-insight interaction for field agents and operators

Tools & Technologies

OpenAI GPT-4, LangChain, Pinecone/Weaviate (Vector DBs), Whisper

Figma, Adobe XD (UI design & prototyping)

Node.js, REST APIs, WebSockets (for chatbot/backend integration)

Mapbox/D3.js/Chart.js (for visualizations)

Learning & Reflections

This project represented a significant shift from traditional GUI design into conversational AI and multimodal user interaction. It challenged our team to:

Reframe “UI” as voice, tone, and flow — not just screens

Design with ambiguity and user error in mind

Work closely with ML engineers to align model behavior with real-world user needs

This work expanded my design scope from human-computer interaction to human-AI collaboration, where empathy, context, and clarity are more important than ever.