I design and prototype human-centered AI systems that combine emotional intelligence, adaptive personalization, and conversational learning. This includes MeliWorld, a bilingual AI companion with voice interaction, safety screening pipelines, therapeutic redirection, and age-based response logic. I work across a multimodal AI tool stack—Groq/Llama 3, Claude, ChatGPT, Meta AI, and Photoshop AI—to create and refine safe, emotionally supportive AI characters, stories, and experiences for children and families.
MeliWorld is an emotionally intelligent AI companion designed to comfort, guide, and engage young children. The system blends voice, text, animation, and emotional heuristics to create a safe, supportive environment.
My goal was to explore how AI can recognize emotion, respond with empathy, and use multimodal cues such as tone, pacing, and expressive visuals to support children’s emotional well-being.
I designed MeliWorld so the assistant always responds through the lens of care, emotional validation, and gentle support. This included:
• Tone-matched replies
• Gentle phrasing
• Adaptive warmth
• Emotion-validating patterns
• Sensitive-topic guardrails
• Safe redirects
• Predictable boundaries
• Crisis-reduction language
• Voice pacing control
• Expressive inflection
• Micro-animations
• Age-tiered messaging
To ensure emotionally congruent and developmentally safe interactions, I built a lightweight emotional-reasoning framework for MeliWorld. The system interprets the child’s emotional cues, maps them to supportive categories, and adapts tone, pacing, and safety filters in real time.
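A minimal sketch of how such a framework might map cues to supportive categories; the cue words, category names, and adaptation values below are illustrative assumptions, not MeliWorld's actual implementation:

```python
# Illustrative sketch of a lightweight emotional-reasoning step.
# Cue words, categories, and adaptation values are hypothetical.

EMOTION_CUES = {
    "sad": "comfort",
    "scared": "reassure",
    "lonely": "connect",
    "excited": "celebrate",
}

ADAPTATIONS = {
    "comfort":   {"tone": "gentle",  "pacing": "slow",   "safety_filter": "strict"},
    "reassure":  {"tone": "calm",    "pacing": "slow",   "safety_filter": "strict"},
    "connect":   {"tone": "warm",    "pacing": "medium", "safety_filter": "strict"},
    "celebrate": {"tone": "playful", "pacing": "lively", "safety_filter": "standard"},
}

def interpret(message: str) -> dict:
    """Map detected emotional cues to a supportive category and adaptations."""
    words = message.lower().split()
    for cue, category in EMOTION_CUES.items():
        if cue in words:
            return {"category": category, **ADAPTATIONS[category]}
    # No emotional cue detected: default to neutral but still-safe settings.
    return {"category": "neutral", "tone": "warm", "pacing": "medium",
            "safety_filter": "standard"}

print(interpret("I feel sad today"))
```

The key idea is that the mapping runs before generation, so tone, pacing, and safety filters are already set when the response is produced.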
MeliWorld combines multiple modalities to create emotionally present, developmentally attuned interactions.
The goal was to make each interaction feel like a small moment of connection, not just language output.
I collaborated directly with generative models to prototype expressive behaviors, refine emotional tone, and craft age-appropriate narrative interactions. This created an iterative workflow of:
MeliWorld demonstrates how emotionally aligned AI can support children through safe, expressive, and developmentally aware interactions. The project shaped my design philosophy: AI should enhance emotional well-being through expressive cues, thoughtful constraints, and deeply human-centered design.
How I design emotionally intelligent, safe, and modular AI systems using agentic thinking, human-centered design, and cognitive psychology.
Agentic AI • Human–AI Interaction • Emotional AI • Safety & Guardrails

My approach to agentic AI combines system architecture, instructional design, cognitive psychology, and UX into a unified framework for creating safe, adaptive, emotionally intelligent AI behaviors. I don’t “train” foundation models; I design the constraints, guardrails, and behavioral scaffolding that shape how AI systems behave.
Each agent has:
I determine when agents operate sequentially or in parallel based on dependency, safety, and context.
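The sequential-vs-parallel decision can be sketched as follows; the agent names and dependency rule here are my illustration of the idea, not the production orchestration code:

```python
# Illustrative sketch: safety gating runs sequentially (everything depends
# on it), while independent analyses run in parallel. Agent names are
# hypothetical stand-ins.
import asyncio

async def safety_agent(text):  return f"safety({text})"
async def emotion_agent(text): return f"emotion({text})"
async def story_agent(text):   return f"story({text})"

async def orchestrate(text: str) -> list[str]:
    # Safety must run first: downstream agents depend on its output.
    cleared = await safety_agent(text)
    # Emotion and story analysis are independent, so they run in parallel.
    return list(await asyncio.gather(emotion_agent(cleared), story_agent(cleared)))

print(asyncio.run(orchestrate("hello")))
```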
This document introduces the philosophy and guiding principles that shape my case studies and design work. It provides the lens through which I approach learning, AI, storytelling, and emotionally intelligent experiences.
As a designer, my north star is simple: make technology feel human, joyful, and supportive — especially when it’s powered by AI.
My work combines learning science, cognitive psychology, storytelling, and playful design to create products that not only solve problems but actively lift people up.
Playful interactions create emotional resonance, reduce anxiety, increase confidence, and deepen learning.
AI should feel supportive — not mechanical. I design systems that recognize emotional context and respond with care, clarity, and developmental appropriateness.
Designing AI is designing behaviors — not just screens. Safety, clarity, and intuitive flow guide every interaction pattern I create.
My background in Instructional Design & Technology and Educational Psychology shapes every learning experience I build.
Good design teaches — great design teaches without the user realizing it.
Stories build emotional trust, reduce intimidation, and transform functional tasks into meaningful experiences. Characters help guide users through complex or unfamiliar territory.
I prototype early and often — sometimes dozens of iterations in a single day. Rapid exploration helps refine tone, emotional response, workflow, and narrative clarity.
I believe the future of design lies at the intersection of emotion, intelligence, and play — and my work is dedicated to building experiences that make people feel supported, inspired, and delighted to learn.
I design AI systems by combining human-centered UX, cognitive psychology, and technical systems thinking. Rather than treating AI as a feature, I treat it as a behavioral architecture—a system that must be predictable, emotionally intelligent, safe, and deeply aligned with user needs.
Across projects like MeliWorld AI, my approach integrates:
I believe the future of digital products is not just interface design—it is intelligence design.
My work explores how to build AI systems that are safe, emotionally aware, deeply human, and technically sound.
Read the AI Behavioral Architecture - Click Here

Great AI experiences begin with emotional intelligence, not speed. Every interaction should consider the user’s feelings, context, age, and intent before generating an answer.
Empathy is a design requirement — not an optional feature.
AI should never “create its way into danger.” A layered safety pipeline — detection, redirection, reassurance, and cooldown logic — protects users before the model responds.
User emotional and physical safety always comes first.
Children and adults rely on consistency. AI should clearly signal what it can do, what it won’t do, and why — creating a predictable rhythm of interaction.
Trust grows when the system remains stable... even when the input does not.
AI should not replace human support; it should guide people toward it when needed.
Escalation, grounding, and “gentle redirect” patterns ensure AI remains a companion, not a clinician.
Human connection is the safety net AI should always honor.
AI must adapt to the user’s developmental stage, literacy skill, stress level, and attention window.
Shorter messages for younger users, chunked steps for overwhelmed users, and simple choice structures all support healthy engagement.
Good AI makes thinking easier, not harder.
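A minimal sketch of age-tiered message shaping; the age cutoffs and limits are assumptions for illustration:

```python
# Illustrative sketch of age-tiered message shaping; the tiers and
# sentence limits are assumed values, not the product's actual ones.

def shape_message(text: str, age: int) -> list[str]:
    """Shorten and chunk a response based on a rough age tier."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if age <= 6:
        # Youngest users: one short sentence at a time.
        return sentences[:1]
    if age <= 10:
        # Older children: a few sentences, delivered as small chunks.
        return sentences[:3]
    return sentences  # teens and adults: the full response

msg = "Great question. Plants need light. They also need water. Roots drink it up."
print(shape_message(msg, age=5))
```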
Use only the minimal information needed to personalize an experience.
Adapt tone, pacing, and content based on interaction — not intrusive data collection.
The user should feel known… never watched.
Design conversational repair strategies (“Did you mean…?”, “Let me try again.”) that keep users confident, understood, and emotionally supported.
Trust grows when mistakes are met with grace.
The purpose of AI is not to automate conversation — it is to enrich human capability.
Whether a child learning through play or an adult exploring creativity, AI should support curiosity, connection, and well-being.
AI should amplify what makes us human, not replace it.
Meli’s World is designed to help children feel supported, understood, and emotionally safe through a bilingual, emotionally intelligent digital companion.
Designed with advanced Generative AI (Groq/Llama 3, Claude AI, ChatGPT) and a responsive HTML/CSS/JavaScript front end, Meli’s World delivers a safe, engaging, and developmentally aware experience that grows with the child. Available on laptop, tablet, and mobile, with a mobile app in development supporting natural speech-to-speech and text-to-speech conversation.
💖 Core Experience Differentiators
eBook - Meli's Grand Adventure
Paperback - Meli's Grand Adventure
Paperback - Meli's Grand Adventure Coloring Book
Children's Paperback Book - Meli's Grand Adventure
The following pages are excerpts from the paperback book . . .
Children's Coloring Book - Meli's Grand Adventure
I have also created a children's coloring book for ages 3-8 (soon to be published). It is based on the eBook "Meli's Grand Adventure" and is titled "Meli's Grand Adventure: Coloring Book". The following pages are excerpts from the book . . .
Click Here to View Meli's Website
A technical learning network simulation in which users configure Dynamic NAT/PAT in a command-line interface. The focus was on usability testing, instructional clarity, and improving learner success in a high-stakes IT task.
Role: UX Researcher, Instructional Designer, Flow Architect
Tools: RouterSim, Java, JavaScript, Figma, Adobe Illustrator
Learners were consistently failing a key CLI simulation in RouterSim related to NAT configuration. Our goal was to identify where learners struggled, measure behavior patterns during CLI input, and redesign the instructional flow to improve comprehension and task success.
“The senior network administrator at Gadget Research Company needs you to set up NAT on a network. Ensure that all internal users can access the Internet. You have a company of 20 users. EIGRP has been configured on internal routers.”
Generated an assessment report of each user's actions, focused on common problem areas:
| Insight | Impact |
|---|---|
| Users skipped or miswrote NAT access lists | NAT failed silently, leading to frustration |
| “Overload” keyword often omitted | No NAT translation, blocking Internet access |
| Inside/outside interface mislabeling | Broke task flow early, increasing abandon rates |
| Guided syntax helped learners succeed | Reduced trial-and-error input and frustration |
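For reference, a working Dynamic PAT configuration for this scenario would look roughly like the following; the interface names and addressing are illustrative, not the exact lab values:

```
! Illustrative Dynamic PAT ("overload") configuration for this scenario.
! Interface names and addressing are assumptions, not the exact lab values.
!
! Step 1: define which internal addresses may be translated
! (the access list users often skipped or miswrote)
access-list 1 permit 192.168.1.0 0.0.0.255
!
! Step 2: enable PAT -- omitting "overload" leaves NAT silently broken
ip nat inside source list 1 interface Serial0/0 overload
!
! Step 3: label the inside and outside interfaces correctly
interface FastEthernet0/0
 ip nat inside
interface Serial0/0
 ip nat outside
```

Each of the three steps corresponds directly to a failure pattern in the table above, which is why the redesigned guided syntax walked learners through them in order.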
Iterative redesign and programming development produced the following results:
This project reinforced the value of bridging UX research and instructional design. By analyzing real user behavior at a keystroke level, we uncovered not just what went wrong—but why. That led to targeted improvements in both content design and simulation behavior.
A legal content system used by lawyers, contractors, and project managers needed modernization. I led the redesign and development of an intuitive, responsive web experience that made complex construction law topics easy to find and use.
Role: UX Designer, Developer, Content Strategist
Tools: HTML5, CSS3, JavaScript, Adobe XD, Balsamiq
The original application was difficult to navigate, lacked search capabilities, and had inconsistent terminology. I interviewed key user groups to understand their needs and friction points, and identified major usability gaps.
| Finding | Design Response |
|---|---|
| Users forgot key legal definitions | Added glossary popups and hover terms |
| Lawyers needed dual-context access | Built side-by-side tabs: Law vs. Examples |
| Scrolling fatigue | Implemented collapsible menus and sticky nav |
| Overwhelming text blocks | Chunked content into clear, skimmable cards |
This project showed how UX principles can simplify high-stakes legal information. Understanding domain-specific workflows helped me design a tool that was both functional and approachable.
I created and published a children’s storybook and companion coloring book using generative AI tools. This tested the potential of AI for storytelling, illustration, layout, and multi-format publishing.
Role: Product Designer, Illustrator (AI-powered), Publisher
Tools: ChatGPT, DALL·E, Figma, KDP (Amazon), Canva
| Challenge | Solution |
|---|---|
| Maintaining character consistency | Used prompt stacking with reference constraints |
| Making child-friendly images | Emphasized white space, soft color palettes |
| Ensuring print quality | Used 300 DPI, vector-style, and line-only for coloring pages |
| Extending brand experience | Created a story-driven website and Facebook page |
This project merged creativity, storytelling, and UX strategy—powered by generative AI. It deepened my understanding of how emerging tools can support lean content production and user-centered design.
A pre-application intelligence system that evaluates job–resume fit, tells you whether to apply, and shows you exactly what to improve before you do. Designed by someone who built it because he needed it.
PRISM began as a response to a problem I experienced firsthand. I was already doing what most job seekers consider due diligence — screening roles carefully, visually inspecting job descriptions, and using AI-assisted resume tailoring to match language and optimize for ATS. I was doing manually what PRISM does systematically. And I was still not getting the outcomes I expected.
That raised an uncomfortable question I couldn't ignore: if careful manual analysis and AI-assisted optimization don't guarantee outcomes, why build a tool to automate them? The answer, I realized, wasn't about the resume at all. It was about decision quality — the layer that comes before any optimization. The tools were doing their job. The missing piece was a system that made the reasoning visible, consistent, and improvable over time.
This gave me a research foundation most designers don't have: I was simultaneously the designer, the user, and the primary research participant. Every application became a data point. Every outcome — expected or not — sharpened the problem definition. The insight was real, and that specificity is exactly what PRISM is built on.
The job application market is flooded with tools that help candidates write better resumes, generate cover letters, and optimize for ATS keywords. These tools solve a surface problem — how do I present myself? — while ignoring the more fundamental question: Should I be applying to this role at all?
The core dysfunction: candidates treat job applications as a volume game, applying broadly and hoping something connects. This wastes time, erodes confidence, and generates no useful signal. The missing layer is a decision-first AI system that performs strategic analysis before the resume is ever submitted.
PRISM is a decision-support system, not a content generator. That distinction shaped every design choice. The following six principles guided the system's behavioral and interaction design:
PRISM is a four-stage agentic pipeline. Each stage has a specific responsibility, bounded inputs and outputs, and feeds deterministically into the next. The architecture is designed so that the Decision Engine is never exposed to raw, unprocessed text — it only receives structured signals.
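The shape of that pipeline can be sketched as follows; the stage names and matching heuristic are my illustration of the architecture's key property (the decision stage sees only structured signals, never raw text), not PRISM's actual internals:

```python
# Hypothetical four-stage pipeline sketch. Stage names and the crude
# term-overlap heuristic are illustrative, not PRISM's real logic.

def parse_inputs(job_text: str, resume_text: str) -> dict:
    return {"job": job_text.lower(), "resume": resume_text.lower()}

def extract_signals(parsed: dict) -> dict:
    job_terms = set(parsed["job"].split())
    resume_terms = set(parsed["resume"].split())
    return {"overlap": len(job_terms & resume_terms), "required": len(job_terms)}

def decide(signals: dict) -> str:
    # The decision stage never touches raw text, only structured signals.
    ratio = signals["overlap"] / max(signals["required"], 1)
    return "apply" if ratio >= 0.5 else "reconsider"

def present(decision: str) -> str:
    return f"Recommendation: {decision}"

signals = extract_signals(parse_inputs("python sql design", "python design systems"))
print(present(decide(signals)))
```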
The Decision Engine is the core intelligence layer. It evaluates four weighted factors:
Each analysis produces a confidence band, not a binary score. Critically, the same score produces different recommendations depending on context — market conditions, gap severity, and portfolio evidence. Here's how the verdict UI communicates that:
The same 74% score produces a different recommendation depending on what surrounds it:
"That becomes much more intelligent than: 'You match 74%.' Because humans don't make decisions based on one scalar number — they think in risk, effort, upside, credibility, and transferability."
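A simplified sketch of that context-dependent verdict logic; the factor names, weights, and thresholds below are illustrative assumptions, not PRISM's actual values:

```python
# Hypothetical context-aware verdict: the same weighted score maps to
# different recommendations depending on gaps and market conditions.
# Weights and thresholds are illustrative, not PRISM's real values.

WEIGHTS = {"skills": 0.4, "experience": 0.3, "gaps": 0.2, "evidence": 0.1}

def verdict(factors: dict, blocking_gap: bool, tight_market: bool) -> str:
    score = sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)  # 0.0 - 1.0
    # Context, not the scalar alone, determines the recommendation.
    if blocking_gap:
        return f"{score:.0%} match, but a blocking gap: reconsider applying"
    if score >= 0.7 and not tight_market:
        return f"{score:.0%} match: apply"
    if score >= 0.7:
        return f"{score:.0%} match in a tight market: apply, but address gaps first"
    return f"{score:.0%} match: likely not worth the effort"

factors = {"skills": 0.8, "experience": 0.7, "gaps": 0.6, "evidence": 0.8}
print(verdict(factors, blocking_gap=False, tight_market=False))
print(verdict(factors, blocking_gap=True, tight_market=False))
```

The same 73% score yields "apply" in one context and "reconsider" in another, which is the behavior the verdict UI is built to communicate.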
The interaction model prioritizes low friction at entry and high clarity at output. Key decisions:
Structured forms (dropdowns for industry, role level, years of experience) create cognitive overhead before the user has received any value. Research on form abandonment shows that pre-value friction dramatically reduces completion. PRISM accepts raw text and extracts structure itself — the user experiences zero setup cost.
A single score (e.g., "72% match") implies false precision and invites gaming. A confidence band with labeled dimensions (skills, experience, gaps) communicates both the verdict and its reasoning, allowing the user to interrogate the result rather than just accept it.
Not all gaps are equal. A missing certification is improvable; a missing 10 years of C-suite experience is not. PRISM distinguishes between blocking gaps (reconsider applying) and improvable gaps (address in cover letter or resume before submitting). This distinction changes the user's action — which is the entire point.
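The blocking/improvable distinction could be modeled as simply as this; the gap categories listed are illustrative examples, not an exhaustive taxonomy:

```python
# Sketch of the blocking vs. improvable gap distinction; the example
# categories are assumptions for illustration.

IMPROVABLE = {"certification", "portfolio sample", "keyword alignment"}
BLOCKING = {"years of executive experience", "security clearance", "license"}

def classify_gaps(gaps: list[str]) -> dict:
    result = {"blocking": [], "improvable": [], "unknown": []}
    for gap in gaps:
        if gap in BLOCKING:
            result["blocking"].append(gap)      # reconsider applying
        elif gap in IMPROVABLE:
            result["improvable"].append(gap)    # address before submitting
        else:
            result["unknown"].append(gap)       # surface for user judgment
    return result

print(classify_gaps(["certification", "security clearance"]))
```

What matters is that each category maps to a different user action, not just a different score.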
Deliberately out of scope for v1. Cover letter generation already exists. The design risk was scope creep — adding a feature that dilutes PRISM's decision-first identity. The v1 constraint is a design choice, not a limitation.
PRISM's success metrics are intentionally different from the tools it's designed to replace. Traditional resume tools measure output volume. PRISM measures decision quality.
A system that tells users to apply with 80% confidence — and they don't get interviews — destroys trust immediately. The solution is explicit epistemic humility in the UI: confidence bands are labeled as probabilistic estimates, not guarantees. The system explains what it can and cannot see.
LLM responses to unstructured job analysis prompts are inconsistent without rigid scaffolding. The solution was structured prompt engineering with explicit output schemas — forcing the model to return JSON with defined fields rather than narrative prose, then translating that structure into the visual output layer.
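A minimal sketch of that schema-first approach; the field names are illustrative, and validation here is deliberately simple:

```python
# Hypothetical example of schema-constrained output: the prompt demands
# JSON with fixed fields, and the parser rejects anything else.
# Field names are illustrative, not PRISM's actual schema.
import json

OUTPUT_SCHEMA = """Return ONLY valid JSON with exactly these fields:
{"fit_score": <0-100>, "confidence": "low"|"medium"|"high",
 "blocking_gaps": [<string>], "improvable_gaps": [<string>]}"""

def parse_analysis(raw: str) -> dict:
    """Validate the model's reply against the expected schema."""
    data = json.loads(raw)  # raises if the model returned narrative prose
    required = {"fit_score", "confidence", "blocking_gaps", "improvable_gaps"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"Model omitted fields: {missing}")
    return data

reply = '{"fit_score": 74, "confidence": "medium", "blocking_gaps": [], "improvable_gaps": ["certification"]}'
print(parse_analysis(reply)["fit_score"])
```

Forcing structure at the boundary means the visual layer can render fields deterministically instead of interpreting free-form text.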
Many job postings are vague. "5+ years of experience" with no context. "Strong communication skills." PRISM must surface when the input quality is too low to generate a reliable signal — and communicate that limitation honestly rather than fabricating confident-sounding output.
The natural pull in any resume-adjacent tool is toward feature bloat: add a cover letter generator, a LinkedIn optimizer, a portfolio reviewer. Every one of these features was considered and intentionally deferred. PRISM's core value is decision clarity — everything else dilutes it.
PRISM is the most personally grounded system I have designed. Every design decision was tested against lived experience — not a user persona, but the actual friction of being a senior designer navigating a competitive and often opaque hiring market.
What the project demonstrates beyond its surface function is a design philosophy I've developed across 15+ years of system work: the most valuable AI systems don't generate content — they generate clarity. They take ambiguous, high-stakes situations and give people the structured signal they need to act with confidence.
PRISM is also a statement about what AI product design at the system level looks like. It required not just interface design, but prompt architecture, output schema design, confidence modeling, interaction logic for ambiguous inputs, and a principled definition of what the system should refuse to do. That is the work I want to keep doing.
MeliWorld AI is a conversational companion designed to provide emotionally supportive, developmentally appropriate interactions for children. I designed the conversational system architecture, including emotional signal detection, hybrid safety routing, and age-adaptive UX patterns that make probabilistic AI behavior predictable, safe, and trustworthy.
The system combines deterministic UX logic (emotion detection, safety rules, and behavioral constraints) with LLM-based generation to balance creativity with control.
It operates through modular, agent-like workflows (Emotion, Safety, Story, Translation) that interpret user input, apply safety constraints, and generate context-aware responses in real time.
This architecture ensures that each interaction is emotionally aligned, developmentally appropriate, and consistently safe across unpredictable user input.
From single-response chatbots to structured, safety-aware agentic systems.
This hybrid architecture separates emotionally sensitive inputs from open-ended generative interaction, ensuring predictable emotional safety while preserving conversational richness and creative flexibility.
I designed this architecture using a modular conversational routing model that balances emotional safety, response appropriateness, and generative flexibility—an approach aligned with emerging best practices used in production-scale conversational AI systems.
Role: Product Designer, Conversational UX Architect, System Interaction Designer
How user input is analyzed, routed, and transformed into a final response.
From emotion detection to hybrid routing—delivering safe, adaptive responses.
Web prototype (text + voice capable)
Conversational architecture, emotional routing framework, prompt scaffolding, age-adaptive interaction design
Emotional intelligence, safety guardrails, prompt architecture, conversational routing and agentic orchestration
LLMs can sound empathetic, but their behavior is probabilistic and can vary in tone, appropriateness, and helpfulness. For children (and emotionally sensitive moments), inconsistency can reduce trust and create safety risks. The design challenge was to create a system that delivers emotionally reliable responses while still enabling creative, open-ended conversation.
Two-Layer Response Model (Hybrid Routing)
Layer 1 — Deterministic Emotional Safety Layer: detects emotional signals (e.g., “sad,” “scared,” “lonely”) and routes to pre-validated supportive response patterns for predictable safety.
Layer 2 — Generative AI Layer: handles open-ended conversation (curiosity, storytelling, questions) when the input is not flagged as emotionally sensitive, constrained by prompt scaffolding and tone rules.
User input → emotional keyword detection → classification → response pattern selection → delivery
This routing approach improves emotional reliability and reduces unexpected responses during sensitive moments.
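The flow above can be sketched in a few lines; the keyword list and responses are illustrative placeholders, and the generative layer is stubbed out:

```python
# Minimal sketch of the two-layer hybrid routing flow described above.
# Keywords and responses are illustrative placeholders.

EMOTIONAL_KEYWORDS = {"sad", "scared", "lonely"}

SAFE_RESPONSES = {
    "sad": "I'm here with you. Want to tell me more about it?",
    "scared": "That sounds scary. You're safe here with me.",
    "lonely": "I'm really glad you came to talk to me.",
}

def route(user_input: str) -> tuple[str, str]:
    """Layer 1: deterministic safety routing. Layer 2: generative fallback."""
    words = set(user_input.lower().split())
    hits = words & EMOTIONAL_KEYWORDS
    if hits:
        keyword = sorted(hits)[0]  # deterministic choice when several match
        return ("safety_layer", SAFE_RESPONSES[keyword])
    # Not emotionally sensitive: hand off to the LLM under tone constraints.
    return ("generative_layer", "[LLM response constrained by tone rules]")

print(route("I feel sad"))
```

Because the safety layer is ordinary deterministic code, its behavior in sensitive moments is fully testable, while the generative layer retains its flexibility everywhere else.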
| Challenge | Solution |
|---|---|
| Emotional vulnerability in user inputs | Deterministic “safe path” routing using an emotional keyword library and pre-validated supportive responses |
| Unpredictable AI behavior | Hybrid architecture: safety layer first, generative layer second (with prompt scaffolding constraints) |
| Different developmental needs across ages | Age-based adaptation for language complexity, response length, and UI readability |
| Trust and clarity in conversation | Consistent personality/tone + structured responses that help users form reliable mental models |
| Making AI feel supportive without overreach | Designed boundaries and supportive “next step” prompts |
MeliWorld reinforced that conversational AI design is behavioral design. The model matters, but reliability and trust come from routing, constraints, prompt scaffolding, and interaction patterns that make probabilistic systems feel understandable and safe.
The following UX/instructional writing samples are derived from user guides, websites, and software applications I helped create. Click the button next to a title to view the related PDF file.
The Colorado Avalanche were Stanley Cup champions in the 2021-2022 season. To commemorate that season, dashboards were created detailing their month-to-month regular-season win record from October 2021 to April 2022. This mobile app was created with Figma . . . (Click arrows to expand window)
Wedding Photographer
I created a high-fidelity mobile app design based on the work of a professional photographer. I first wanted to provide striking images reachable from the main screen, so I inserted a small visual guide informing users that they could scroll vertically down the screen. I also wanted to provide vital information about the business, such as pricing. I created a section of valuable FAQs that displays a scrollable menu; clicking a question displays its answer on the right side of the screen. This prototype was created with Figma . . . (Click arrows to expand window)
This is a mobile app for a company that provides over 200 pre-employment tests. The app is easy to use via scrollable menus. It was created with Figma . . . (Click arrows to expand window)
Resource Associates (https://www.resourceassociates.com) offers over 200 pre-employment tests available on demand on their web site. Users can obtain a rich amount of information to aid in the selection of a test. I designed the following wireframe that forms the basis of a mobile app that I developed.
The wireframe prototype was developed with Axure RP.
At XYZ.com, vendor onboarding originally required employees to manually enter information into the system from data sheets, a process bogged down by duplication and vendor errors. This prototype provides a quicker way to determine which documents vendors should obtain and fill out. After each question is answered (Yes or No), the next question is automatically displayed based on the vendor's choice.
This prototype was created with Figma . . . (Click arrows to expand window)
Visually, the solution appears simple; however, the app had to account for all possible vendor responses. You can see that in the connection map below (lines and arrows).
I have been touched by the efforts of groups throughout the world that tirelessly work toward saving our oceans and planet and ameliorating our climate crises. I wanted a brief but comprehensive way to present visuals, related text, and links to organizations throughout the world, so I created a prototype with Figma . . . (Click arrows to expand window)
Meli's Grand Adventure
This web site is based on a paperback book titled "Meli's Grand Adventure". It is about Meli, the silky terrier, on her heartwarming journey to a new home, from Pennsylvania to Colorado.
IRAM
This web site is based on a desktop application created years earlier. There was a need to convert the desktop application into a web site using traditional front-end tools; without it, there would be no way to explore and retrieve digital information on the subject matter. The site provides a means by which construction owners and contractors can easily access all of the information required to identify, recognize, analyze, and manage potential construction contract claim and dispute situations. It was constructed using HTML5, CSS3, and JavaScript. Glossaries are provided, and a full web site search function is available.
Use the username (esteem) and password (456) to access the IRAM website.
Water Research Foundation
The Water Research Foundation (WRF) is the leading research organization advancing the science of all water to meet the evolving needs of its subscribers and the water sector. This site was constructed using WordPress.
RouterSim Web Site
I created and have maintained the RouterSim web site from its inception. This site was recently revamped with Bootstrap and JavaScript.
My Portfolio
I recently revised my portfolio website with Bootstrap, HTML5, CSS3, JavaScript, and JQuery. The original site was menu driven only and displayed content on several pages. Users can navigate the current site by using the menu on the right side of the screen or vertically scrolling down the page with a mouse.
The current site presents several new features:
Western Pacific Insurance
This revamped web site was modified with Bootstrap 4.0.
In creating a software application that lets users interact with and learn from a simulator (virtual network), I wanted a meaningful, systematic approach to solving problems. I wanted a framework that was strongly user-oriented and promoted user empathy, along with a non-linear process that allows for multiple iterations in examining issues and problems. Design Thinking and human-centered design were chosen as the methodologies and were employed in the design and development of two dozen RouterSim products. Both methods share several similarities:
Several employees were invited to the Design Thinking meetings:
The following, typical areas were dealt with in the Design Thinking processes.
The first part of Design Thinking involved conducting ethnographic research. I observed actual users and examined the methods they currently employed in learning Cisco material, taking care not to prematurely make assumptions or draw conclusions. Journey maps and empathy maps were used to track users' interaction with their learning environment, helping identify needs that users were often unable to articulate. A subject matter expert also provided input to help identify user needs. It was observed that users:
This was frustrating for users and limited learning experiences and the development of cognitive flexibility. I informally met with small groups who expressed their feelings and we discussed issues. They were encouraged to "storytell" their experiences. I used active listening and empathy while interacting with people.
Define
Card sorting was performed to separate and categorize either problems discussed with users or issues that might be encountered in the development and use of a RouterSim product. Sticky notes were put on a large board and continuously added to and/or rearranged if necessary. The top 5 areas (pain points) of interest were:
The problem statement is "what does an interface look like that is pleasing to the eye, intuitive, and functional?"
The problem statement is "are there specific steps in building a network (e.g., click a device button to display the device on the Network Visualizer screen, then connect devices)?"
The problem statement is "how do you use the step-by-step online documentation in conjunction to configuring each device on the screen?"
The problem statement is "based on suggested features, what is the flow of the program from one screen to another that is intuitive for the user?"
The problem statement is "after going through step-by-step instruction, how are users' problem-solving skills assessed?"
Based on the three levels of software that RouterSim wanted to create, plus user observations and gathered data, three personas were created. These corresponded to applications for Cisco CCENT, CCNA, and CCNP.
Ideation
A problem statement was the starting point in visualizing possible solutions. Brainstorming was employed to generate ideas, good or bad, and free association was encouraged so that ideas could flow without judgment or interruption. Card sorting and sticky notes on a board were used to categorize ideas. The top five or six ideas were used in the creation of prototypes.
Features
The creation of product features held a special place in this Design Thinking process; it was integral to the success and acceptance of products. I brainstormed and identified possible areas of development:
In examining possible solutions, step-by-step storyboards were created so that program workflow(s) would be easier to conceptualize and visualize. Information was taken from sticky notes on a white board. After viewing the storyboard(s), it was common for Design Thinking participants to add and/or remove items from the white board. Essentially, this process allowed us to flesh out the storyboard(s) into a smoother, more understandable journey.
User Journey Map
In working with current and future users of Cisco learning material, it was necessary to create user journey maps. In conjunction with storyboarding and pre-prototyping, user journey maps allowed me to more closely correlate users' feelings, thoughts, and potential interactions with existing or future prototypes.
Prototype
One purpose of prototyping is the generation of new ideas. Depending on how many items and/or problems were fleshed out, storyboards, mockups, and low- and high-fidelity prototypes were created. Users and teammates continually provided feedback. Potential customers were first shown paper prototypes, which were based on sticky notes that were reviewed and modified. Prototypes provided a way to see whether there were gaps in team thinking and/or whether something was missed.
Testing
Each potential product was divided into several content areas that were explored within a lean process producing a minimum viable product (MVP). A minimal version of the interface, features, and functionality was created and tested by users. There were four main areas that were tested:
Each of the five main areas of Design Thinking was open to iteration so that prior areas could be revisited when new ideas or insights were generated. A non-linear format was supported so that any part of the design process could be re-examined. For example, prior assumptions were revisited to validate the existence of sufficient and valid evidence. Through iteration, it became clearer which part(s) would be examined (or re-examined) for increased clarity and usefulness.
WCAG - UX/UI Best Practices Guide
While employed at an earlier position, I was asked to create a WCAG - UX/UI best practices guide. While not comprehensive in covering every topic, it provides a general guide to WCAG readability issues and other user-interaction concerns.
UX Research - Methods and Strategies
When I co-founded RouterSim, a great deal of work was needed to substantiate our hypothesis that network simulators would be a viable alternative to real Cisco routers and switches. The following UX research strategies were utilized in gathering user information. Click on any of the following methods and strategies to learn how they were used in UX research.
Concept testing is a key aspect of the human-centered design process, and RouterSim utilized it. When the initial idea of a router simulator was spawned, there was no way to intuitively know whether such a concept would be accepted; none existed. I conducted ethnographic studies, observed actual users, and examined the learning methods they currently employed to learn Cisco material. I was careful not to prematurely make assumptions or draw conclusions.
I created quantitative surveys and presented them to people in the IT field. People were asked a series of questions about the effort and cost of preparing for a Cisco certification exam. This included questions not only about the use of books but also about the actual purchase of routers and switches, a very expensive proposition. Feedback from potential users indicated that a low-cost simulator would be a boon to the industry.
When RouterSim was created, no other company had a network simulator for students, so no extensive competitor analysis was completed. However, Cisco textbooks were available that provided step-by-step instruction; to practice, you had to have access to real equipment, which was potentially expensive.
At a later date, another company started producing a simulator, and we periodically conducted competitor analysis on their products. Eventually Cisco produced a simulator, and we again needed to conduct periodic competitor analysis. RouterSim interfaces were developed based on our theoretical approach and on deficiencies noted in competitor products.
When new products were planned, I gathered a great deal of information about potential users. I wanted to create and present relevant information based on the background, knowledge level, and goals of the user. Another key factor was that a user might be studying for a specific Cisco certification exam at a specific technical level.
I created surveys based on the 5-point Likert Scale. A Likert scale is a psychometric scale commonly involved in research that employs questionnaires. It is the most widely used approach to scaling responses in survey research.
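As a minimal illustration of how 5-point Likert data can be summarized, the sketch below computes a mean score and a top-box percentage for one item. The responses and the item wording are hypothetical, not actual survey data.

```python
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# for one item, e.g. "Real lab equipment is too expensive for exam preparation."
responses = [5, 4, 5, 3, 4, 5, 2, 4, 5, 4]

avg = mean(responses)                                       # central tendency
top_box = sum(r >= 4 for r in responses) / len(responses)   # share answering 4 or 5

print(f"mean = {avg:.1f}, top-box = {top_box:.0%}")
```

Summaries like these are the kind of per-item figures that are typically charted and compared across survey questions.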
Observation was recorded at various points of discovery and usability. I observed how people studied and prepared for Cisco exams by using books and real equipment. I conducted remote moderated and unmoderated research. This was usually one-on-one interaction in which I watched the user in the course of the user's normal study activities and discussed those activities with the user.
I recorded "pain points," points of motivated behavior, and times of frustration. When presented with prototypes, participants were videotaped.
I interviewed people who were studying for their Cisco certification exam or had recently taken it. I wanted to investigate their experiences and especially their feelings. I used a Likert-scale survey plus an open-ended one. The following common themes were reported:
I did a deeper dive into understanding how well users could deal with networking issues. I wanted to examine the speed at which a user could successfully accomplish tasks. I constructed assessments drawn from current literature in the Cisco areas of CCENT, CCNA, and CCNP. Users were presented with topics that had increasing levels of complexity and varying response formats (e.g., a single multiple-choice response, more than one correct answer, etc.). Examining the data, I saw three main clusters of information highly correlated with specific levels of Cisco-related accomplishment and/or expertise. This allowed me to establish three baselines for the creation of RouterSim software.
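A three-cluster grouping like the one described above can be sketched with a simple one-dimensional k-means. The scores below are illustrative assumptions, not the original assessment data, and the min/mean/max seeding is just one convenient choice.

```python
from statistics import mean

# Hypothetical assessment scores (% correct per user); not the original study data.
scores = [22, 25, 30, 48, 52, 55, 78, 82, 85]

def kmeans_1d(data, iters=20):
    """Tiny 1-D k-means fixed at k=3, seeded at the min, mean, and max of the data."""
    centers = [min(data), mean(data), max(data)]
    for _ in range(iters):
        clusters = [[], [], []]
        for x in data:
            nearest = min(range(3), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        # Move each center to the mean of its cluster (keep it if the cluster is empty).
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans_1d(scores)
print(clusters)
```

With data this separated, the three clusters stabilize after one pass, giving three score baselines analogous to beginner, intermediate, and advanced tiers.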
I have worked in a Kanban environment; because of the nature of the products being developed, a waterfall methodology would not have been compatible with the development tasks. I have worked with users on continuous, iterative usability tasks.
This is closely aligned with RITE (rapid iterative testing and evaluation). I see this approach as more focused testing: a usability testing method that usually involves a small number of participants. The key premise is not just to identify usability problems, but to react quickly to identified issues and test new solutions. There could be multiple RITE episodes within the broader concept of Design Thinking testing. I went through several iterations with think-aloud exercises.
I examined common factors among potential users:
Based on the personas that were established, customer journey maps were created. These were end-to-end encapsulations of what a customer (user) thought, felt, and did across the "touchpoints" in their journey. Graham (2007) stated that "touchpoints are the key building blocks of experiences," and further defined episodes, experience, and end-to-end experience on the basis of touchpoints.
There were four stakeholders that I interacted with and interviewed. They played key parts in the growth and success of RouterSim. Two were resellers and had specific commercial needs in branding and selling RouterSim software.
Todd L. - Cisco Subject Matter Expert
Neil E. - Publisher at Sybex and John Wiley and Sons
John L. - Professor of Psychology
Takashi S. - President of Logic Vein
Initially, I needed to find out which study needs would surface when presenting a router and switch simulator to people in the IT field. I needed to find out what users actually wanted. Qualitative and quantitative research was conducted. Qualitative research included:
Quantitative research that I conducted included:
The data was also translated into graphic charts and examined. This greatly assisted in the creation of multiple personas, which in turn guided us in setting levels of complexity in the step-by-step labs.
I tested the usability of software and documentation. I developed different user personas based on users' prior knowledge and future goals. Some were new to the world of Cisco hardware and configurations, while others had been working in the IT field for several years. Users were timed on how long it took them to complete tasks using the software. Written and verbal feedback was recorded. I conducted remote moderated and unmoderated research. I re-examined and modified personas, where needed, for each of the more than two dozen products that were created.
Mental models of users traversing the navigation options of the software were examined with this technique. The layout and presentation of documentation were also examined with this method. Open and closed sorting were used in looking at clustered data. Information was placed on sticky notes and a whiteboard. After viewing the storyboard(s), it was common for Design Thinking participants to add and/or remove items from the whiteboard. Essentially, this process allowed me to flesh out the storyboard(s) into a smoother, more understandable journey.
I tested user interest in and engagement with RouterSim software by examining variations of attributes in an A/B format. Screen attributes were examined, such as foreground and background color, font style and size, and device size. Users were randomly divided into two groups: a control group and a group usually presented with a single variant.
Audio responses of users were recorded and data from short questionnaires were garnered. Attitudinal responses to different devices (graphical representations of routers and switches) were examined by the use of a Likert Scale. I conducted remote unmoderated research that automatically timed how long it took users to accomplish tasks.
Examples . . .
All RouterSim products had one thing in common: a Network Visualizer screen where users could build and test virtual networks. At the top of the screen was a button panel. I had to create icons that best represented the action associated with each button. A/B testing was utilized to compare several variations of each button. Users were asked to provide feedback on the intended action of each button after looking at its icon. This allowed me to closely fashion each button to its associated action.
Flow Chart
Toward the end of the UX research, I created a flow chart for the Network Visualizer program based on UI specifications. It was important to capture every screen asset (buttons, tabs, links to screens, etc.) so that programmers had a clear picture when they started writing Java code.
Persona
Based on UX research, it was decided that three types of eLearning products would be initially developed, each progressively more difficult and sophisticated. Three distinct personas were developed. The categories for each were:
User Journey Map
In working with current and future users of Cisco learning material, it was necessary to create user journey maps. They allowed RouterSim to more closely match user experiences and potential success with future products. The following user journey involved a person preparing for the Cisco CCNA Certification exam using a prototype of RouterSim's product, CCNA Network Visualizer. At various points in the program I captured the user's "Doing," "Thinking," and "Emotions." Of interest is the clear report of positive and negative emotions.
IRAM Site Map (Created with Figma and Autoflow plugin)
This website provides a means by which construction owners and contractors can easily access all of the information required to identify, recognize, analyze, and manage potential construction claim and dispute situations.
During the conceptualization and research phases for new products, I used my instructional design skills. For each product I have worked on, I performed needs assessment and gap and task analyses. I established user personas, learner profile(s), and learning objectives. It has always been imperative to interact with potential users in an iterative way (Design Thinking) to make sure that the planned application would be engaging, meaningful, and relevant to their professional goals.
In creating a software application that lets users interact and learn with a simulator (virtual network), I wanted a meaningful and systematic approach to solving problems. I wanted a framework that was strongly user-oriented and promoted user empathy. Two other requirements were a non-linear process and support for multiple iterations when examining issues and problems. Design Thinking and human-centered design were chosen as the methodologies and were employed in the design and development of two dozen RouterSim products. Both methods share several similarities:
I used the well-known ADDIE and Design Thinking models. I drew from the works of Rand Spiro (cognitive flexibility theory), Roger Schank (problem-based scenarios), and the Don Kirkpatrick model of learning.
Click Here to View ADDIE Example
The following design systems were created for websites, eCommerce products, and MeliWorld AI. Click on the corresponding button next to the title to view the related PDF file.
About Me
I design human-centered AI systems and intelligent product experiences that balance innovation with predictability, safety, and usability.
With over a decade of experience across UX/UI design, instructional design, and product development, I focus on simplifying complex systems into intuitive, usable interactions. My work blends cognitive psychology, learning science, and modern AI capabilities to create experiences that feel both powerful and approachable.
I have led the design of more than 24 digital products, including simulation-based environments used by tens of thousands of users. This experience shaped how I approach design—not just as interface creation, but as system design that connects user behavior with underlying technical logic.
More recently, my work has shifted toward conversational AI and agentic systems. I design structured interaction models, safety-aware workflows, and emotionally intelligent experiences—combining deterministic UX logic with constrained generative AI to produce reliable, user-aligned outcomes.
I am especially interested in designing AI systems people can trust—where intelligence is not only powerful, but understandable, predictable, and grounded in real human needs.