Purpose
This study addresses the growing challenge of balancing AI complexity with usability in business applications, particularly in the context of generative AI (GAI). As enterprises increasingly embed AI into core processes, issues around user control, transparency and trust hinder adoption. The article develops a conceptual framework that integrates user experience (UX) design principles with AI capabilities to enhance usability, ethical compliance and user engagement. By aligning AI system design with user needs and business goals, the research provides actionable strategies to foster responsible AI adoption across industries such as finance, healthcare and e-commerce.

Design/methodology/approach
The study follows a Design Science Research methodology structured across three cycles: relevance, rigor and design. It includes a systematic literature review, bibliometric analysis and expert interviews with stakeholders such as developers, UX designers, product managers and business transformation leads. Scenario-based feedback and pilot implementations were used to validate the framework empirically. The research employed tools such as SCOPUS and Bibliometrix for literature curation and network visualization. Data from over 200 participants, including developers and managers, informed the refinement of a UX/UI framework that supports explainability, personalization, transparency and ethical governance for AI-driven business applications.

Findings
The research resulted in a validated UX/UI framework that bridges AI system complexity with usability and trust. Empirical validation through pilot projects showed improved developer velocity (up to 15%), enhanced customer satisfaction (30%) and higher system usability scores. The study found that incorporating patterns such as “automation with control” and “data-driven decision support” led to better user understanding and adoption of AI features.
Transparency, explainability and adaptive design were key to reducing resistance, especially among non-technical users. The framework also integrates ethical principles, bias mitigation, user autonomy and accountability across AI lifecycle stages, enhancing long-term trust and system performance.

Originality/value
This research presents one of the first empirically validated frameworks that balances AI-driven system complexity with usability using a design science lens. Unlike prior models that focused solely on technical or ethical aspects, this study synthesizes AI evolution, HCI, enterprise readiness and UX theory into a practical, customizable toolkit for organizations. It offers role-based design patterns and metrics for trust, adoption and transparency. The originality lies in merging conceptual rigor with real-world applicability across diverse AI scenarios. It advances academic literature and provides practitioners with a roadmap for responsible AI deployment aligned with strategic, ethical and user-centric objectives.