Understanding Lived Experience: Bridging Artificial Intelligence and Natural Language Processing with Humanities and Social Sciences

True human-centered Artificial Intelligence (AI) is impossible without addressing the inherent and diverse aspects of humanness. Deep learning models have achieved undeniable and remarkable success in many vision and language processing tasks. However, as it moves forward, the field cannot continue to pretend it can do it all by itself, especially when we advertise it as ‘human-centered AI’. The time has come to open up the stage for methodological pluralism in the interest of critical and democratic science, and for the benefit of society. In this paper, I want to draw particular attention to the aspect of lived (subjective) experience, one research area highly misunderstood and hugely neglected in AI, and especially in Natural Language Processing (NLP). Our intentions, selfhood, autonomy, emotions, feelings, sensory knowledge, and cultural history are integral components of our intelligence. Thus, future AI and NLP models will need to align more closely with the embodied component of human intelligence. As we push the limits of creativity and innovation in AI, we need to develop a new way of looking at human experience, with a better scientific understanding of intelligence and its own practices, at the intersection of many disciplinary fields.


Introduction
In the last decade, Artificial Intelligence (AI) has seen a resurgence in almost all its core areas, including vision, natural language processing (NLP), speech recognition, image and video generation, multi-agent systems, and robotics. This has impacted a wide number of applications in a variety of domains like games, medical diagnosis, autonomous driving, language translation, and interactive personal assistance. People, businesses, and devices alike are contributing to an unprecedented data explosion while computing power has strongly and steadily advanced. Natural language processing, the oldest area of AI, has also seen major advances in quantitative reasoning, question answering, machine translation, text classification, speech recognition, and chatbots, across academic and industry sectors. A notable achievement is represented by large language models like ELMo, GPT, mT5, and BERT [1,2], which can learn grammatical relations, predict word meaning, and capture basic world knowledge from statistically relevant patterns in naturally occurring text. Large quantities of data are used to tune the billions of parameters of such language models. By predicting the next word in a sequence, these models can generate new text with impressive mastery of language, being able to reach close to human levels for various genres like news articles, essays, fiction, and computer code.
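The core mechanism mentioned above — predicting the next word from statistically relevant patterns in text — can be illustrated with a deliberately minimal sketch. The toy model below uses raw bigram counts over a handful of sentences; real large language models instead learn billions of neural network parameters over web-scale corpora, so this is only an illustration of the statistical principle, not of any actual system:

```python
from collections import Counter, defaultdict

# Toy corpus; real models are trained on billions of words.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length=4):
    """Greedily generate text by repeatedly predicting the next word."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(generate("the"))
```

Even this crude counting scheme produces locally fluent word sequences, which hints at why scaling the same statistical idea to vastly larger models and corpora yields such impressive surface mastery of language.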
Such NLP technologies, however, do not come without challenges. The NLP community has struggled to find and use quality data, especially for resource-poor languages and for specialized applications. Moreover, these state-of-the-art large language models do not exhibit a deep understanding of the texts they process, which limits their applicability. Most research and tools are still mostly focused on narrow benchmark tasks and situated scenarios where "their incomplete mastery still provides value" [3]. In addition, these models' exponentially growing size and processing power impose unaffordable computational costs for real-world applications [4]. Take, for instance, Google Duplex, a recent over-the-phone natural conversation technology designed for narrow, specific, but real-world tasks, like restaurant reservations, which was advertised as being able to make the conversation experience as natural as possible, in a humanlike voice. However, putting aside the inherent ethical issues, most of such bots today are successful "so long as the customer stays on the 'happy path,' the ideal scenario where the customer says all the right things" [5]. Yet, our natural way of communicating is versatile and embodied, multimodal and multisensory [6,7], requiring a more seamless coupling between the NLP technology and the human-computer interface design for the tool to find its market purpose and better address customer needs.
The limitations of current language technologies like such conversational interfaces are even more apparent in critical applications like healthcare. Although AI has brought much-needed solutions to various biomedical applications like drug discovery and basic health operations, it still significantly lacks dedicated resources for diagnosis and practices of care within and outside clinics and hospitals. The need to provide optimized, individualized, and person-centered care is growing. Thus, addressing these competing needs and complex problems requires novel and creative approaches to health care. Specifically, dedicated AI tools could aid in the development of innovative, effective, and person-centered solutions to health challenges, supporting the realization of a future for health care that is personalized and participatory in nature [8,9,10].
Another major focus has been on the inevitable ethical concerns about the different and inherent types of human and machine bias, as well as about the kind of moral decisions that should guide machine behaviour [11]. As expected, these issues have spurred wide debates in humanities and social sciences circles, with more universities and private companies being forced to consider such dimensions more seriously. However, the dialogue between AI and NLP, on one side, and the humanities and social sciences (as well as other fields), on the other, has weakened over time. This is an interesting paradox: the more AI becomes part of our daily life, the more isolated it seems to get. Recent studies which examined data from 1950 to 2018 [12] have shed new light on this tendency of AI to face social and cultural questions on its own, disregarding the importance of and situated expertise in the humanities and social sciences. In fact, the disconnect is mutual, as scholars of the social sciences, physical sciences, and humanities appear to be falling behind the quick development of AI. Moreover, these statistics show a shift in AI research domination today, with the more recent concentration of AI research in private industry due to easy access to expensive infrastructure and compute resources. To these we can also add conceptual and financial obstacles that drive these communities even further apart [13].
As a researcher, teacher, and mentor, I have been fortunate to work and collaborate across a variety of disciplines, which has allowed me to look at AI problems from a wide range of perspectives. I, too, join the growing number of researchers in different fields who argue for the crucial importance of collaborative practices that have the potential to foster greater synergy among diverse communities. Such an approach will have a huge impact on tech innovation, leading to the much-needed human-centric vision of future AI technologies. Even though there have been some initiatives that highlighted perspectives from various academic fields as well as across public-private sectors to further our understanding of the key emerging issues [13,14,15], they are still relatively rare.
As such, in this paper, I want to draw particular attention to the aspect of lived (subjective) experience, one research area highly misunderstood and hugely neglected in AI, and especially in NLP. One major reason is the incorrect representation and data assumptions we make in AI and NLP about natural intelligence, which is tightly linked to our bodies. Our intentions, selfhood, autonomy, emotions, feelings, sensory knowledge, and cultural history are integral components of our intelligence [16]. Thus, future AI models will need to align more closely with the embodied component of human intelligence. As we push the limits of creativity and innovation in AI, we need to develop a new way of looking at human experience, with a better scientific understanding of intelligence and its own practices. In spite of recent advancements in the emergent field of embodied intelligence, we are still looking for answers when it comes to issues like how to represent and make sense of human instinct, social awareness and interaction, and human-centric machine collaboration [17]. Research along such dimensions will point to directions for creating more robust, trustworthy, and perhaps actually intelligent AI systems which align more closely with the biological aspect of human intelligence.
The correction I am suggesting in this paper is, indeed, intended to foster innovation in the technology sector by diversifying the rather narrowly established practices in AI and NLP. Given the highly interdisciplinary nature of these fields, it is rather odd that the trends, paradigms, and key insights of cognate fields (like psychology, philosophy, sensory anthropology, and history) have not been taken on board by AI researchers. As such, we should not disregard the important disciplinary turns and methodologies in other relevant input communities of practice and areas of research [18], especially when it comes to meaning and felt experience. It is thus important for NLP, in particular, to call for relevance -for language technologies to be more open to interdisciplinary collaboration, more accessible, and, in effect, socially consumable.
I argue that such a phenomenological approach to AI and NLP has exciting implications for how to design and build semantic systems that get us closer to felt experiences in context -meaning, what people know and feel, and how deeply they are engaged sensorially.It also allows us to investigate the areas of intersection between AI and NLP, on one side, and human communication, perceptual needs and cultural values, on the other.
The paper is structured as follows. I start with a brief history of AI and scientific inquiry as a disembodied, objectivist view of the world, followed by a discussion on the disconnect between this approach to science and human experience (Section 3). In Section 4, I focus in particular on the phenomenon of lived experience, and argue for the crucial role of AI and NLP in its systematic, empirical investigation (Section 5). In Section 6, I outline some challenges, suggest some ways in which AI and NLP and other disciplinary fields can work together toward building such a framework of understanding, and offer some future directions.

Science as a Disembodied, Objectivist View of the World
The Turing test [19] is a famous method of inquiry in AI for determining whether or not a computer system is intelligent -meaning, capable of thinking like a human being. It is attributed to the mathematician Alan Turing, whose ideas led to the early versions of modern computing and the foundations of the field. He was, arguably, the first to analyze the role of embodied intelligence, not only locating the complexity of AI in the sophistication of the hardware and software, but also taking into consideration that language is embodied, as part of the larger context of human-computer interaction.
In 1950, Turing proposed 'The Imitation Game' -an engineering solution to the question "Can Machines Think?" [20]. Instead of providing a definition of intelligence, Turing referred to two key intellectual human abilities: reasoning and generating illusions. As such, he suggested imitating the human intellect in five areas: various games; language learning; language translation; cryptography; and mathematics. Turing defended the thesis that '[t]he extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration'.
Turing's remarks, however, have been largely simplified and overlooked by subsequent AI research. Since the statistical turn of the nineties, AI has shifted more and more toward what is known as 'weak AI', using practical approaches that infer statistical correlations from ample datasets, moving away from the need for a more comprehensive understanding of human experience and behavior [21,22,23]. A series of shortcuts, trade-offs, and simplified assumptions has led to the narrow-scope success of current AI systems. While the research community is divided into roughly two camps that have followed different paths toward building intelligent systems (symbolic and non-symbolic AI), it is clear that current NLP practices tend to focus on scaling and benchmarking [24], while the state-of-the-art models do not have a good grasp of issues like common sense, knowledge acquisition, or how we use language to engage with both our external and internal world.
The cognitive revolution of the 1950s was an intellectual movement that focused on an interdisciplinary study of the mind and its processes. New voices at the time (like Herbert Simon, Noam Chomsky, Marvin Minsky, and John McCarthy) put forth ideas that would later serve as the main principles of modern cognitive science, linguistics, and the nascent field of artificial intelligence [25]. The main intuition behind it was that human intelligence resembles computation in its essential characteristics. To date, the scientific method is built around testable hypotheses, and it relies on facts established through repeated measurements and agreed upon universally by what is considered an independent observer. Here, "observer" refers to a physical system capable of acting as a measuring device (i.e., the third-person view of the world).
Under such a scientific understanding of reality, the observer of the experience is "a disembodied eye, looking objectively at the play of phenomena" [25]. The fundamental assumption of this view is that we inhabit a pre-given world with particular properties (e.g., length, color, etc.) which we, as objective observers, represent internally as a set of symbols and manipulation operations on these symbols. Under such a theory, intelligent behavior presupposes the ability of the agent to act by representing relevant features of the pre-given world. The agent's behavior is considered successful to the extent to which her representation of the world (or situation) is accurate. In other words, the evaluation of such systems relies on the representation and manipulation of symbols as accurately capturing some pre-defined features of the real world, such that the information processing leads to a successful solution of the given problem.

The Disconnect between Science and Experience
This tacit, unquestioned commitment to an objectivist, disembodied view of how reality manifests, how we come to know the world, and what we are, has dominated the AI world pretty much from its inception.
Two of the major critiques of such a world view contend that symbol processing does not seem to be the right approach for representing experience and that operations on symbols can be specified using only their physical form, not their meaning -i.e., "[t]he symbolic level is not reducible to the physical level" [25]. Indeed, the capability of symbols to represent objects and ideas does not depend on how their referents appear in the world (e.g., the word 'tree' has nothing to do with the way we identify and refer to a tree in the physical world). Thus, in such computational systems, words are regarded as ungrounded symbols, given the disconnect between them and the world in which they are used and experienced [26]. Yet, we do not treat words as arbitrary symbols. Instead, we rely on the affordances of the word, in an embodied way -i.e., how people evaluate meaning, and for what purpose, in terms of their own body, actions, and goals. In this respect, language understanding models in NLP and AI are no less than disembodied computational accounts of meaning.
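The ungroundedness of symbols in such computational accounts can be made concrete with a small distributional sketch. In the toy example below (the corpus and word choices are mine, purely illustrative), the word 'tree' is characterized solely by the company it keeps in text; the resulting vector has no connection whatsoever to perceptual contact with actual trees, which is exactly the disconnect described above:

```python
import math
from collections import Counter, defaultdict

# A tiny, made-up corpus; the point is that 'tree' is characterized only
# by its textual contexts, never by any perceptual contact with trees.
sentences = [
    "the tree has green leaves",
    "the plant has green leaves",
    "the dog has a loud bark",  # 'bark' here is a sound, not tree bark
]

# Build co-occurrence vectors: for each word, count the other words
# appearing in the same sentence.
vectors = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for w in words:
        for c in words:
            if c != w:
                vectors[w][c] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# 'tree' and 'plant' look similar purely because their textual contexts
# overlap; the model never 'knows' what either word refers to.
print(round(cosine(vectors["tree"], vectors["plant"]), 2))
print(round(cosine(vectors["tree"], vectors["dog"]), 2))
```

Modern neural embeddings are far more sophisticated, but they inherit the same property: similarity is computed over patterns of textual use, not over embodied experience of the referents.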
One domain where the ungrounded representation issue has become evident is the scientific study of experience, since "all distinctions we make in relation to the world around us are manifested primarily in our experience" [27]. Yet, concepts like Self and Other, and the gap between them, have been a major subject of inquiry in philosophy and cognitive science [28] throughout the years -as one cannot experience the world through another's body, their subjective context, and their point of view. As such, to compensate for this gap, the focus has shifted toward the complex construct of empathy, as an attempt to understand the other [29].
This disconnect has led to what is known as the "Explanatory Gap" [30] between perceiving the human from without (i.e., through causal explanations, using third-person methods of behavioral and neuroscience approaches) and perceiving the human from within (i.e., how it feels, captured by the first-person nature of lived experience). Given that lived experience is subjective and directly accessible only from the first-person position of the person who lives it, its scientific study has presented challenges that cannot be adequately (and fully) addressed with the objectivist view of scientific knowledge, understanding, and validity of experience. One reason is that this view requires the act of observing, describing, reflecting on, and remembering the (past) lived experience to actually 'represent' it in the present. Thus, according to Western science, since we cannot access our past experience with precision, studying experience cannot lead to scientific results that can be validated intersubjectively.
In the early 1990s, non-cognitivist theories like enactivism started to disrupt cognitive science, giving birth to novel ideas, including embodied cognition, the Bayesian brain, and active inference and the free energy principle -having as their main focus a generative account of perception wherein observers actively shape the worlds they perceive. The field has since gradually opened up to the study of lived experience, after it had been missing from the research agenda for many years. According to famous philosophers like Nagel [31] and Chalmers [32], while most studied cognitive science phenomena (like perception, memory, thinking, and emotions) manifest in our lived experience, the field has failed so far to account for the "what it is like" -that is, the experiential or phenomenally conscious character of lived experience. Both Nagel and Chalmers address the subjectivity of conscious experience, in particular the so-called 'phenomenal consciousness', which focuses on properties like perceptions, sensations, feelings, emotions, thoughts, and wants, which are different from any cognitive, intentional, or functional properties of access consciousness [33].
All this time, Western science, including artificial intelligence, has favored the third-person perspective, neglecting the study of first-person experience by labeling it as unreliable and prone to bias. This direction has led to a paucity of research on lived embodied experience and on the knowledge practices that allow us to get in contact with it. This tension between science and experience has deepened especially in artificial intelligence (and NLP, in particular) given their highly interdisciplinary nature -standing at the crossroads between many cognate fields that are indispensable in defining what is human and how we experience our inner and outer world.

The Science of Lived Experience
The investigation of experience from a first-person perspective can no doubt complement and enrich existing research from a third-person perspective. However, making first-person investigation a research program is a relatively recent development. In the past few decades, different research fields have undergone changes, leading to a new view of the scientific role of the observer and her embodied experience in the generation of knowledge. One such notable proposal is the enactive approach to cognition, which reached popularity with Varela, Thompson, and Rosch's famous study of the Embodied Mind [25]. Varela's claim was that only the development of a science of consciousness can handle the primacy and irreducible nature of lived experience necessary to any understanding of the mind. Thus, he called for a dialogue between mainstream third-person approaches and a disciplined, systematic, and empirical exploration of experience from a first-person perspective, with its own specialized methods, procedures, and validation criteria. In fact, in his view of lived experience, first-person (i.e., phenomenological: lived through the body) and third-person (i.e., behavioral and physiological) methods each represent one side that can and should inform and constrain investigations of the other side in terms of research questions, design, protocols of data acquisition, analysis, and validation of results.
According to enactivism, observer and world co-evolve through reciprocal interaction. Here, perception and action are not disconnected, but tightly linked, as our perceptions guide our actions and our actions determine what we perceive. According to Varela, a disciplined first-person account of experience cannot be acquired with the theoretical, observational practices of psychological investigation through introspection, like the simple psychological questionnaires and other limited techniques still very popular in cognitive science today. The reason is that they are inadequate to describe the 'what it is like' of experience as it is lived in first person, focusing instead on reproducing people's beliefs about their experience and the world along a number of pre-determined categories of inquiry.
I, too, believe that we need to open up the study of lived experience at the intersection of many research fields, promoting the development of special methods and procedures of first-person perspective inquiry.

The Role of AI and NLP in the Systematic, Empirical Investigation of Lived Experience
Human communication is, in fact, multimodal and multisensory, integrating multiple sensory and effector modalities such as speech, gestures, body posture and movement, and even senses like touch, smell, and taste. Our natural ways of thinking, interacting with and experiencing the world, and the knowledge generated this way are all embodied. This idea is somewhat connected to Whorf's 1956 linguistic determinism, which asserts that the particular language one speaks influences the way one thinks about reality [34]. These views influenced research in linguistics, but also psychology, ethnology, anthropology, sociology, philosophy, as well as the natural sciences. Of course, the concept of embodied experience is more complex and sophisticated than Whorf's hypothesis suggests, but the main idea is that cognition goes beyond the simple manipulation of formal symbols, including sensing and interacting with the environment and one's own body.
As we move toward immersive experiences in augmented reality, virtual reality, and mixed reality, and want to build technologies that extend human capabilities, it becomes clear that we need to ground abstractions in familiar experiences so that they are meaningful. Concrete concepts are mainly grounded in sensory-motor information, whereas abstract word meanings are underpinned predominantly by linguistic and affective information. For instance, if we take a word like 'pencil' and look it up in a dictionary, we get definitions like "a thin cylindrical pointed writing implement; a rod of marking substance encased in wood" [35]. Yet, grounding the concept might also bring back a blend of sensorimotor experiences of seeing, touching, holding, and smelling such pencils. In this way, the concept pencil will be grounded for us in ways that encapsulate these lived experiences. So, thinking about and remembering concrete and even abstract concepts includes, rather automatically, the way we use them and experience them sensorially [26].

Appropriate Methodologies for the Collection of Both First- and Third-Person Data
In Chalmers' view, the purpose of a science of consciousness is to connect first-person data to third-person data [36] (investigating behaviour, brain processes, and environmental interactions accessible through known scientific methods). First-person data are about conscious experience, like various sensory experiences (e.g. vision, audition, touch, smell, and taste), bodily experience (e.g. pain), mental imagery (e.g. recalled visual images), emotional experience (e.g. happiness, fear), and occurrent thought (e.g. the experience of deciding) [37]. Chalmers has argued that reductive strategies to explain conscious experiences are doomed to fail [32,38]. Even if science could provide a complete functional explanation, we would still be left with the question of why this functioning is associated with this particular subjective experience. First-person data are not data about objective functioning, as "to live in accordance with scientific theory [..] undermines our genuine subjectivity" [39].
Scientists thus face the problem of finding good methodologies for collecting the data (both first- and third-person), expressing them in a suitable language, and finding connecting principles; methods have to be developed in both domains.
The last few decades have seen a fascinating development of methods in psychology and especially in neuroscience (e.g. brain imaging, single-cell studies), as well as impressive improvement in computer science and engineering with data-driven statistical models. Thus, while the science behind the third-person domain is well understood, unfortunately, this is not the case when it comes to lived experience, whose domain is missing well-developed methods for gathering and analyzing first-person data. Traditions that had largely disappeared from the scientific study of experience (i.e., introspection and philosophical phenomenology) are now making a comeback, offering new first-person approaches to the study of consciousness [40]. In fact, many new first-person methods today seem to merge ideas from introspectionism and phenomenology. Introspection, which was introduced by Wilhelm Wundt in the 19th century (see [41]), is a psychological method where "one attends carefully to one's own sensation and reports them as objectively as possible" [42] -thus describing the felt sensation and not the stimulus that provoked it. On the other hand, for early phenomenologists like Brentano, mental acts (i.e., judging, sensing, imagining, or hearing) reflect a sense of direction and purpose [43]. Under such a framework, the phenomenal structure of experience involves intentionality, in addition to the sensory qualities of the object of purpose. Thus, one cannot study phenomena like thoughts and judgement except by taking into account one's inner phenomenal experience [44].

Some Challenges of the Study of Experience
To describe an experience, one needs first to 'get in touch' or reconnect with it (or with some aspects of it), observe it (in retrospect), self-reflect, and/or communicate it to others. All these aspects bring up the issue of validity, which, under an objectivist scientific approach, reduces to determining the extent to which we can compare them against the 'right' experience (i.e., the 'ground truth') 'as it was'. In this section, I will take a look at some important questions encountered in the scientific study of experience, identifying challenges centered around three main topics: (a) memory, (b) description and expression, and (c) intersubjectivity of lived experience [45].
Despite the major progress in AI in almost all its standard areas, powered by large-scale data and high-compute resources, a significant amount of work is still needed in order to imitate, extend, simulate, and augment many aspects of human embodied intelligence. Specifically, any investigation of lived experience requires a framework of theoretical orientations, practices of knowledge generation, and empirical validation that accounts for this experiential dimension. In the subsections corresponding to each challenge, I will suggest some ways in which AI and NLP can help toward building such a framework of understanding.

Memory: Reconnecting and Recounting Experience
"This naked moment may well be apprehended with greater acuity in retrospect, but how can we know if what we view in hindsight will ever have truly been?" [46].This quote brings up one important fact about experience -that it is always lived in the retrospect.This aspect poses a significant challenge to any first-person methodology that seeks to capture all its dimensions.Even in the case of recent (i.e., just-elapsed) experience, there is always some delay, forcing us to observe, describe, and reflect upon it as it has already happened.Thus, any study of experience requires an understanding of the phenomenon of memory.For this, we need a set of conceptual and epistemological orientations accompanied by appropriate methodological approaches to lived experience.Such resources have the potential to lead to a better understanding of the process of relating to, connecting with, and recalling past experience.In addition, such an understanding is directly coupled with first-person data collection, the language we use in recalling the past, and the methods of validity of first-person research.
One major scientific, observer-independent challenge of the memory aspect of experience is that it cannot be accessed with precision -meaning, we do not have a 'ground truth' of the exact experience event 'as it happened' (due to the inherent "temporal" and "interpretative" distortions [47] involved in the process). In fact, memory researchers have expressed concerns about human confabulation, self-deception, and the tendency toward "false memories" [48,49]. In response, some researchers have started developing specific methodological tools to mitigate such inherent memory challenges. For instance, tools like DES (Descriptive Experience Sampling) [50] and MMP (Micro-phenomenology) [51] are aimed at minimizing memory distortions (e.g., the temporal gap between the past experience and its examination in the present) and decreasing the possibility of false memories [52], as well as at determining the extent to which the person remembering and reliving the past experience is "in contact" with it in the present [47]. Of course, retrospective evaluations can lead to memory bias, so such research should be done with caution. For instance, some studies [53] have found discrepancies between individuals' recall of affective experiences and their momentary reports during that time. Additionally, different people show different abilities to describe their experiences, and this may be related to different abilities that lead to specific experiences (i.e., reacting to different memory triggers).
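The core idea behind DES-style methods — probing experience at random moments to shrink the gap between the experience and its report — can be sketched computationally. The scheduler below draws random beep times within waking hours while enforcing a minimum spacing between probes; all parameters (six probes, 9:00 to 21:00, 45-minute minimum gap) are illustrative assumptions of mine, not values prescribed by DES or MMP:

```python
import random

def sample_schedule(n_probes=6, start_hour=9, end_hour=21,
                    min_gap_minutes=45, seed=None):
    """Draw n_probes random beep times (as minutes since midnight) within
    waking hours, at least min_gap_minutes apart, in the spirit of
    random experience sampling. Parameters are illustrative only."""
    rng = random.Random(seed)
    lo, hi = start_hour * 60, end_hour * 60
    while True:  # rejection sampling; fine for small n_probes
        times = sorted(rng.randrange(lo, hi) for _ in range(n_probes))
        gaps = [b - a for a, b in zip(times, times[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return times

schedule = sample_schedule(seed=42)
print([f"{t // 60:02d}:{t % 60:02d}" for t in schedule])
```

Randomizing the probe times is what prevents participants from rehearsing or anticipating reports, which is precisely the kind of memory distortion these methods try to minimize.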
Contrary to the objectivist approach, more recent (enactive) accounts seem to converge on the idea that the only valid starting point for examining the process and epistemology of relating to, remembering, and even 'reliving' past experience is present experiencing: only from the present can we relate to past experience [47]. This is only possible through what makes sense to us of that particular experience as we investigate it in the present moment. However, this recognition has so far not been fully validated theoretically and empirically. Although psychology and cognitive science have long recognized the dynamic and constructive nature of memory [54], mainstream cognitive science accounts in the scientific community still largely regard memory as a storage space, and remembering as a retrieval mechanism that should "correctly" recover the content stored in the past as it was.
It is clear that a more complete theoretical and empirical understanding of memory is desperately needed in the field of first-person studies if we want to better understand (past) experience. Specifically, I argue that the few promising contemporary enactive theoretical accounts in cognitive science (see, for instance, [55,56,57]) must be validated by rigorous empirical, data-driven investigations in artificial intelligence and natural language processing. With few exceptions coming from computer vision and human-computer interaction (like [58,59]), AI (and especially NLP) has not even scratched the surface when it comes to first-person research and lived experience. The AI community should and can bring its own contribution to the science of experience through new practices, tools, computational platforms, and methods of validity which address the specific peculiarities of experience.

Expressing and Describing Experience through Language
Once the past experience is retrieved and, potentially, re-lived to a significant extent, the next step is to express and communicate it to others. Only in this way can first-person research undergo intersubjective validation. Although an experience can be expressed or communicated in a number of modalities, most of the time we communicate our experiences through language. In fact, some researchers believe that first-person research requires "a linguistic description of the content intended by the reflecting act" [60].
Under the mainstream scientific approach, verbal descriptions of evoked experiences are measured against accurate descriptions of how they happened in the past (i.e., an ideal 'ground truth'). Even assuming people are able to reconnect to and recall a past experience as accurately as possible, their ability to describe and share it can vary considerably. This is due not only to social, cultural, and educational differences in vocabulary use, but also to differences in our capacity for self-reflection, to name a few factors. In fact, a large portion of our experience remains unnoticed, hidden, and pre-reflective [61], owing to our tendency to focus on the 'what' of our experience instead of how it unfolds and feels to us. Thus, like an iceberg immersed in water, most of experience is not readily, or even easily, accessible to awareness, let alone to verbal description and communication. This complicates not only the comparison of the inherent linguistic variations across people and experiences, but also the validation of experiential descriptions.
Nevertheless, this does not mean that one cannot have any access, even partial, to past experience from the present moment. We simply need a set of resources and tools that help us become aware of such hidden aspects of lived experience, and then describe them in detail and more accurately. There have been some attempts in this direction in cognitive science [51,50], but these methods have been tested in a rather solipsistic framework and have not yet been empirically validated at scale.
I argue that advances in NLP, deep learning, human-computer interaction, and other areas of AI should be seriously considered not only for testing such methods, but for improving them with large-scale, data-driven approaches.
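To make this concrete, the following is a deliberately minimal, purely illustrative sketch of one such data-driven step: quantifying how much two participants' verbal descriptions of the same evoked experience overlap, as a crude proxy for intersubjective agreement. The function name, the example descriptions, and the bag-of-words representation are all hypothetical choices for illustration; real NLP work on experiential reports would use contextual embeddings and far richer models of meaning than word overlap.

```python
import math
from collections import Counter

def bow_cosine(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts.

    A toy lexical-overlap measure: 1.0 for identical wording,
    0.0 for no shared words. Not a measure of shared meaning.
    """
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Two hypothetical participants describing the same evoked moment
desc_1 = "a sudden warmth in my chest and a sense of slowing down"
desc_2 = "warmth spreading in the chest while everything felt slower"
print(bow_cosine(desc_1, desc_2))
```

The low score here illustrates the very problem discussed above: two descriptions of plausibly the same felt experience share few surface words ("warmth", "chest"), so validation tools would need semantic rather than lexical comparison.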

Validating the Scientific Inquiry into Lived Experience
Research on experience, like any research, requires intersubjective validation. One can investigate one's own experience without any requirement to reflect on or communicate the outcome of that inquiry; in scientific research, however, there can be no inquiry without description and communication. Given the subjective and rather personal dimension of experience, it cannot easily be corroborated, verified, or, even less, replicated by mainstream scientific practices. Nevertheless, many believe [62,47] that it is possible to communicate experience and make it accessible to others through shareable descriptions and expressions. These translate into specialized resources and tools that can assist at all stages of introspection: reconnecting to a past experience, focusing attention on various aspects of it, describing it (through language or other means), and acquiring first-person data about it, all the way to building dedicated tools and resources to interpret and validate it.
Given the special qualitative character of experience, which is always subjective and from a first-person perspective, Nagel and Chalmers have argued that strategies for explaining experience must differ from mainstream objectivist approaches. Previous attempts in psychology to explain experience scientifically (e.g., introspectionism, philosophical phenomenology) reached a dead end because their results could not be intersubjectively verified, casting doubt on such methods.
However, state-of-the-art cognitive science research on this topic has not moved past the brainstorming phase. I argue again that, in our quest for a methodological and intersubjective framework of lived experience, we need a cross-method approach drawing on a wider set of cognate fields. Artificial intelligence, and natural language processing in particular, can and should offer a suite of resources and tools for the methodological, and especially empirical, exploration and validation of subjective experience. Specifically, we need tools and resources that widen our sensory abilities and develop our capacity for fine-grained descriptions of our inner emotional processes.

Conclusion
True human-centered artificial intelligence is impossible without addressing the inherent and highly diverse aspects of humanness. Deep learning models have achieved remarkable success in some vision and language processing tasks, and few can deny it. However, as the field moves forward, it cannot continue to pretend it can do it all by itself, especially when we advertise it as 'human-centered AI'. The time has come to open the stage to methodological pluralism, in the interest of critical and democratic science and for the benefit of society. Specifically, AI and NLP need a serious reassessment of existing theories, methodologies, and foundational principles, along with their much taken-for-granted scientific assumptions. The community also needs to facilitate the exploration of new methodologies. In this way, language processing can take its rightful place within a diversity of approaches in NLP/AI.
In this paper I have called for a broader dialogue between AI and NLP, on one side, and the humanities and social sciences on the other, regarding the future of truly human-centered technologies. Specifically, I addressed the versatile topic of lived experience, which needs a highly interdisciplinary approach for a more accurate, robust, and, ultimately, more meaningful understanding of the practices of being human.
Such diversity and methodological pluralism will foster the creativity and innovation so much needed to restore harmony to the AI ecosystem.