Abstract
Purpose
This viewpoint examines how generative artificial intelligence (GenAI) robots shape interaction, neutrality, and trust in “third places” (e.g., cafés, libraries, and co-working hubs) within polarized societies. The authors argue that these technologies function as symbolic actors and performative infrastructures whose design and behavior are interpreted through identity-sensitive lenses. This paper introduces the personalization–polarization paradox, showing how well-meant personalization can fragment shared spaces. The authors reframe trust as a collective, contextual, and political phenomenon and propose a design shift from optimization toward inclusion, equity, and participatory governance in civic-facing service environments.
Design/methodology/approach
The authors integrate service ecosystem theory, technology affordance theory, and third-place theory to build a conceptual framework. A five-dimensional lens consisting of sociability, social support, experiential engagement, restoration, and commercial integration explains how artificial intelligence (AI) mediates value co-creation in contested settings. A typology maps roles, affordances, and trust dilemmas across physical, virtual, and hybrid third places. Illustrative real-world cases (e.g., inclusive art installations and AI safety systems in recreation centers) demonstrate how identical technical features can be read differently across publics.
Findings
GenAI robots in third places are not neutral interfaces; they are symbolic actors whose cues (voice, accent, form, and behavior) signal inclusion, exclusion, or power. Trust depends less on technical accuracy than on symbolic fit with local norms and identities. Personalization can heighten satisfaction for some while signaling partiality to others, thereby amplifying fragmentation. Affordances are context-contingent: the same behavior (e.g., a proactive greeting) may build or erode trust across groups and settings. The authors provide a typology of trust dilemmas and clarify the conditions under which AI supports sociability, care, restoration, and fair commercial interactions.
Practical implications
Design AI for trust, not just efficiency: prioritize neutral-yet-warm greetings, multilingual accessibility, and clear role signaling. Minimize perceived surveillance; make data use visible, simple, and opt-in. Calibrate personalization to avoid unequal recognition; test for fairness in greeting, queuing, and recommender logic. Use participatory co-design with local communities to tune tone, embodiment, and interaction scripts. Conduct “symbolic equity” audits alongside privacy and security reviews. Provide low-tech pathways (human fallback and analog signage) to reduce digital privilege. Treat deployments as civic design decisions, with governance, staff training, and accountability shared across stakeholders.
Social implications
Third places are part of social infrastructure. Deployments of GenAI robots can reinforce or repair the civic fabric by shaping belonging, visibility, and informal interaction. Without care, personalization and monitoring may exacerbate inequities, stratify access, and undermine perceived neutrality, especially for marginalized groups. Inclusive governance, transparency, and symbolic equity are essential to protect the ambient cohesion (weak ties and casual presence) that supports democratic life. Designing for restoration and low-pressure participation can expand access for digitally hesitant or identity-vulnerable publics, preserving the civic character of shared spaces amid polarization.
Originality/value
This paper centers polarization as a core lens for service research in third places and reconceptualizes AI as performative civic infrastructure. It integrates three theoretical traditions to explain why identical technologies are read differently across publics. The authors articulate the personalization–polarization paradox, redefine trust as collective and contextual, and offer a five-dimension framework with a cross-format typology (physical/virtual/hybrid). The contribution should guide researchers and practitioners toward inclusive, symbolically aware AI in everyday civic spaces.