THE ILLUSION AS METHOD AND IDEOLOGY: COMPUTATIONAL SOCIOLOGY, VISION AND SIMULATION
Liviu Poenaru, Mar. 25, 2025
​
​
Abstract
In an era defined by algorithmic governance, digital spectacle, and perceptual overload, classical sociological methods prove insufficient to grasp the complexity of contemporary social life. This work explores the convergence of computational sociology, social simulation, and vision as a new epistemological paradigm—one in which perception, simulation, and emergent behavior are not only studied but mobilized as tools to model and shape the social world. Drawing from agent-based modeling, computational vision, and critical theory, the study demonstrates how simulation is no longer a condition of postmodern illusion but a methodological engine for generating and governing social realities. Through case studies such as Cambridge Analytica and phenomena like astroturfing, nudging, and algorithmic vision, we show how interpretive frames and visual regimes are embedded into simulations to test and steer behavior, often reinforcing symbolic hierarchies and political agendas. By modeling how agents perceive and act upon visual and affective stimuli, we uncover how perception itself becomes a site of power. The implications for mental health, democratic coherence, and social inequality are profound. This fusion of disciplines invites a critical rethinking of sociology—not as a mirror of reality, but as a laboratory for anticipating systemic collapse and designing new perceptual and political infrastructures.
​
Introduction
In the 21st century, the sheer complexity of social life demands new conceptual tools. The descriptive methods of classical sociology—qualitative ethnography, surveys, statistical models—are insufficient when attempting to analyze the increasingly non-linear, multi-agent, and algorithmically modulated realities of contemporary society. This inadequacy has opened the door to computational sociology and agent-based modeling (Jackson et al., 2017; Bao et al., 2024). Simultaneously, visual studies—long marginalized in the social sciences—have undergone a revival, particularly in dialogue with media theory, cognitive neuroscience, and critical theory.
We are thus entering a new era in which sociology no longer confines itself to the descriptive or diagnostic function of mapping social structures; instead, it assumes a generative role, actively constructing models, scenarios, and digital environments that give rise to new forms of sociality. Through computational simulations, algorithmic modeling, and immersive data-driven frameworks, sociology becomes a productive force—one that does not simply observe but actualizes society in silico. This shift marks a profound epistemological rupture: social theory is no longer external to its object but embedded within it, capable of testing hypotheses through artificial societies, generating new agents and relations, and simulating the dynamic interplay of perception, affect, and structure. The discipline becomes both laboratory and laboratory technician, crafting speculative futures and illuminating the latent mechanics of the present. In this sense, sociology does not merely interpret the world; it becomes part of the code that constructs it.
​
The convergence of computational sociology, social simulation, and vision offers a new epistemological paradigm: one in which perception, simulation, and emergent behavior are not merely objects of study, but tools for reconfiguring the very idea of the "social." This article aims to explore this triadic intersection and suggest how it reshapes our understanding of social construction, power, and meaning.
​
Computational sociology and social simulation
Computational sociology is not simply the digitization of traditional sociology. It represents a shift in ontology and methodology: from linear causality to complex systems, from fixed variables to dynamic agents, from static data to processual simulations (Miller & Page, 2007; Edelmann et al., 2020). Rather than merely modeling correlations, computational sociology enables us to construct artificial societies that evolve over time based on programmed rules of behavior and interaction.
​
It allows the researcher to encode sociological theories into simulated environments, test hypotheses dynamically, and uncover patterns of emergent behavior. This is particularly powerful for studying large-scale social phenomena such as segregation, polarization, norm formation, or network diffusion. In short, computational sociology converts society into a generative grammar—a set of rules capable of producing complex social realities.
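To make this generative grammar concrete, consider a minimal Schelling-style segregation model, sketched here for illustration with all parameters assumed rather than drawn from the cited literature. A single local rule (agents relocate unless enough neighbors resemble them) suffices to generate global segregation:

```python
import random

# Minimal Schelling-style segregation model (all parameters assumed).
# Agents of two groups relocate to a random empty cell whenever fewer than
# SIMILARITY_WANTED of their neighbors belong to their own group; stark
# global segregation emerges from this purely local rule.
SIZE = 20                # the grid is SIZE x SIZE, with wraparound edges
EMPTY_RATIO = 0.1        # fraction of cells left empty
SIMILARITY_WANTED = 0.4  # minimum fraction of like neighbors an agent tolerates

def make_grid():
    empties = int(SIZE * SIZE * EMPTY_RATIO)
    agents = SIZE * SIZE - empties
    cells = [None] * empties + ["A"] * (agents // 2) + ["B"] * (agents - agents // 2)
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(grid, r, c):
    group = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    if not occupied:
        return False
    return sum(n == group for n in occupied) / len(occupied) < SIMILARITY_WANTED

def step(grid):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))          # the vacated cell becomes available
    return len(movers)

grid = make_grid()
for t in range(100):
    if step(grid) == 0:                 # stop once every agent is content
        print(f"Settled after {t} steps")
        break
```

Even though each agent here tolerates a majority of unlike neighbors, the grid typically settles into sharply homogeneous clusters, the canonical demonstration that macro-level patterns need not mirror micro-level intentions.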
​
The Handbook of Computational Sociology (Abbington, 2019) offers an extensive overview of modeling techniques such as agent-based simulations and social network analysis, allowing researchers to recreate complex social dynamics and test how populations might respond to various stimuli. These simulations are not limited to academic purposes—they can also be co-opted by institutions or platforms aiming to steer public opinion, shape consumer behavior, or manage social risks in advance. The ability to simulate collective behavior turns into a form of preemptive control, where the management of populations is guided not by laws or discourse, but by predictive computation.
​
Social simulation (Gilbert & Troitzsch, 2005; Sun, 2006), often operationalized through agent-based modeling (ABM), is the methodological vehicle of computational sociology. It conceptualizes individuals as agents embedded in environments who act according to specific rules, which may include learning, imitation, competition, and cooperation. Crucially, these agents perceive their world, make decisions, and adapt.
This "perception" is not metaphorical. Increasingly, simulations are incorporating cognitive architectures that mimic real perceptual systems. Herein lies the bridge to vision: perception becomes a computational feature rather than a subjective given.
​
Social simulation introduces a radical proposition: society can be built and observed in silico. But this digital society is not a mirror—it is a lens, a tool for exploring how local perceptions aggregate into global patterns. It is also a means of interrogating how visual inputs—icons, signs, images—shape and are shaped by collective behavior.
​
Social simulation builds on this foundation by offering the technical mechanisms through which these abstract models are instantiated and made dynamic. Rather than remaining static representations, simulations allow for the unfolding of social processes over time—agents interacting, norms evolving, structures emerging. In this context, simulation becomes a mode of experimental sociology, where one can observe the unintended consequences of rule changes or test the thresholds of systemic collapse. It is a laboratory for complex causality.
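Such threshold-testing can be illustrated with a toy version of Granovetter's classic threshold model (a standard pedagogical device, not a system described in this article): an agent joins a collective action once the number of others already active reaches its personal threshold, and an imperceptible change in one agent's disposition flips the simulated crowd from full mobilization to none.

```python
# Granovetter-style threshold cascade. The threshold distributions below are
# his canonical illustration, not empirical data. An agent joins once the
# number of agents already active reaches its personal threshold; the
# function iterates to the fixed point.
def cascade_size(thresholds):
    active = 0
    while True:
        new = sum(t <= active for t in thresholds)
        if new == active:
            return new
        active = new

uniform = list(range(100))      # thresholds 0, 1, 2, ..., 99
perturbed = list(range(100))
perturbed[1] = 2                # one agent made marginally more hesitant

print(cascade_size(uniform))    # 100: the whole crowd mobilizes
print(cascade_size(perturbed))  # 1: the cascade dies at the instigator
```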
The infamous case of Cambridge Analytica offers a concrete and troubling illustration of how social simulation techniques, rooted in computational sociology, were operationalized to predict and influence political outcomes—most notably, Donald Trump’s election in 2016. By harvesting vast quantities of personal data from Facebook profiles, the firm constructed psychographic models of millions of individuals, effectively simulating social behavior at scale (Hu, 2020). These models went beyond mere demographic segmentation; they attempted to map personality traits, emotional vulnerabilities, and ideological leanings. This granular data was then used to simulate how different psychological profiles would respond to specific visual and textual stimuli—particularly political ads engineered to provoke fear, outrage, or affirmation. In doing so, Cambridge Analytica did not merely analyze an existing social landscape; it created algorithmic representations of the electorate, tested how norms might shift under targeted influence, and deployed tailored content in a feedback loop that blurred the line between prediction and manipulation. It was, in essence, an experiment in mass behavioral modification—leveraging the very dynamics of social simulation to anticipate reactions, craft affective contagion, and reshape public opinion in real time.
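Cambridge Analytica's actual models are proprietary and have never been fully disclosed. The following deliberately simplified sketch, whose traits, ad variants, and weights are all invented, illustrates only the shape of the pipeline described above: score psychographic profiles against affective stimuli, then deliver to each simulated voter whichever stimulus the model predicts will move them most.

```python
# Hypothetical illustration only: the traits, ad variants, and weights are
# invented and correspond to no real system. The point is the shape of the
# pipeline: psychographic profile -> predicted response -> targeted stimulus.
AD_VARIANTS = {
    "border_fear":    {"fear": 0.9, "anger": 0.6, "affirmation": 0.1},
    "heritage_pride": {"fear": 0.1, "anger": 0.2, "affirmation": 0.9},
    "elite_outrage":  {"fear": 0.3, "anger": 0.9, "affirmation": 0.3},
}

def predicted_response(profile, ad):
    # Assumed linear model: a voter's susceptibility to each affect weights
    # the intensity of that affect in the ad.
    return sum(profile.get(affect, 0.0) * weight for affect, weight in ad.items())

def target(profile):
    # Deliver whichever variant the model predicts will move this profile most.
    return max(AD_VARIANTS, key=lambda a: predicted_response(profile, AD_VARIANTS[a]))

voters = [
    {"fear": 0.8, "anger": 0.3, "affirmation": 0.2},   # an anxiety-prone profile
    {"fear": 0.1, "anger": 0.2, "affirmation": 0.9},   # an identity-seeking profile
]
for v in voters:
    print(target(v))    # -> border_fear, then heritage_pride
```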
​
We can easily imagine this model becoming banalized—normalized and systematically deployed—across political and economic domains, where behavioral prediction and micro-targeting are no longer exceptional tactics but routine instruments of governance and market control. As these techniques become more refined and accessible, the capacity to simulate and influence public sentiment, consumer behavior, or electoral outcomes may no longer require exceptional technological power, but merely access to data and algorithms. What was once controversial becomes infrastructural, embedding manipulation into the fabric of everyday decision-making under the guise of optimization.
​
Visualized social simulations
The third pillar of our triad is vision—understood not merely as optical input, but as socialized perception. Vision is never neutral; it is embedded in systems of meaning, shaped by historical technologies (from painting to photography to machine learning), and mediated by structures of power.
​
The human visual system, engaging nearly half of the cerebral cortex and involving over 30 distinct brain areas, is a testament to its evolutionary importance and its profound influence on cognition, behavior, and culture. Vision extends beyond the occipital lobe to involve parietal, temporal, and frontal regions, transforming raw sensory input into complex perceptual experiences that guide navigation, memory, and decision-making (Felleman & Van Essen, 1991; Breedlove, Rosenzweig, & Watson, 2012). This distributed, hierarchical architecture makes vision the dominant human sense, uniquely capable of converting environmental stimuli into actionable and socially meaningful insights. For these reasons, it is not surprising that vision is the sense most intensively exploited by capitalism and, more recently, by cybercapitalism.
​
The concept of scopic regimes (Jay, 1988) points to the culturally specific ways in which societies organize what is visible and invisible. These regimes are now being encoded into algorithms, automated surveillance systems, and generative AI—raising the question: how do computational systems see, and what are the consequences of that simulated vision?
​
When visual elements are integrated into social simulations—avatars, digital bodies, gaze patterns, facial recognition systems—we begin to approach a deeper realism in artificial societies. "Deeper realism in artificial societies" denotes the heightened fidelity and complexity with which simulated environments mirror the structural dynamics, behavioral patterns, and lived experiences characteristic of real-world social life. This "realism" is not only about visual accuracy or aesthetic detail, but rather about the psychosocial plausibility and symbolic coherence of interactions, bodies, identities, and power dynamics within the simulation. But we also enter a new era of ethical concern: simulations no longer only mimic social behavior; they also replicate visual ideologies (Crawford & Paglen, 2021).
​
What happens when we synthesize these three elements—computational sociology, social simulation, and vision—is the emergence of a powerful new epistemological and practical terrain for rethinking society. This synthesis does not simply add tools to the sociological toolbox; it reconfigures the very way we construct, perceive, and intervene in social reality.
​
But it is the integration of vision that introduces a critical and deeply human dimension. By embedding perceptual layers into these models, we simulate not only what agents do or how they relate, but how they see—how they perceive others, how they are themselves perceived, and how images, symbols, and signs circulate among them to produce social meaning. This visual layer is not merely decorative; it is constitutive of the symbolic order. It allows us to explore how beauty standards, racialized gazes, surveillance optics, dominant politics, or aspirational aesthetics shape agency and affect.
​
Together, these three elements shape a multidimensional and generative approach to the social—a mode of inquiry where theory, technology, and perception converge in a powerful and transformative synthesis. The result is not only a more intricate understanding of the social fabric but a platform for designing interventions, imagining alternatives, and exposing the implicit codes that govern our mediated lives. This synthesis pushes sociology beyond diagnosis into the terrain of creation—a sociology that does not merely describe the world but actively reshapes its symbolic and structural conditions.
​
This fusion of computational sociology (Abbington, 2019), social simulation, and computational vision (Marr, 2010; Frisby & Stone, 2010; Zhao, 2017; Tsotsos, 2021; Andler, 2023; Goar et al., 2024) opens a radically expanded field of inquiry—one that allows researchers to investigate not only the behaviors of social agents, but the interpretive frames through which those agents perceive and respond to the world. It is no longer sufficient to model what agents do; we must also model how they see, what they recognize as meaningful, and how their perceptions are shaped by cultural codes and visual cues. In this context, interpretive frames refer to the cognitive and cultural lenses through which social agents make sense of their environment. These frames shape how individuals perceive, categorize, and respond to the world around them, including other people, symbols, and situations. They are structured by prior experiences, social norms, values, and visual codes—essentially functioning as filters that determine what is seen as relevant, threatening, desirable, or familiar. Modeling interpretive frames means accounting not just for agents' actions, but for the underlying structures of meaning and perception that guide those actions.
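A minimal sketch of what modeling interpretive frames might look like in practice (the features, weights, and frames below are invented for illustration) shows two agents registering the same stimulus yet attending to different features, and therefore acting differently:

```python
# Sketch of an interpretive frame as a salience-weighted appraisal function
# (features, weights, and frames invented for illustration). Two agents
# receive the identical stimulus but attend to, and value, different features.
STIMULUS = {"crowd": 0.8, "uniforms": 0.6, "flags": 0.7, "noise": 0.5}

FRAMES = {
    "security":  {"crowd": -0.9, "uniforms": 0.8, "noise": -0.6},  # threat-tuned
    "festivity": {"crowd": 0.7, "flags": 0.8, "noise": 0.4},       # celebration-tuned
}

def appraise(frame, stimulus):
    # Features absent from the frame are simply not attended to (weight 0);
    # the sign of the score encodes threat (-) versus attraction (+).
    return sum(frame.get(f, 0.0) * value for f, value in stimulus.items())

def act(frame_name, stimulus):
    return "avoid" if appraise(FRAMES[frame_name], stimulus) < 0 else "approach"

print(act("security", STIMULUS))    # avoid: the scene is read as a threat
print(act("festivity", STIMULUS))   # approach: the same scene, read as a celebration
```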
​
Taken together, these approaches reveal a disturbing yet increasingly normalized logic: simulation becomes prediction, and prediction becomes governance. Visual systems designed to guide attention are coupled with sociological models that forecast behavior, resulting in a techno-scientific regime where populations are not simply represented, but actively steered.
​
Within this framework, we can begin to explore pressing questions about identity, power, and representation. For example, how do simulated individuals react to visual markers associated with race, gender, or class? These are not neutral attributes—they are laden with cultural meaning and sociohistorical baggage. By embedding such markers into the simulation, researchers can observe how bias, stigma, or privilege emerge, circulate, and affect decision-making. Vision, in this context, becomes an interface between the social and the symbolic, where structural inequalities are not only enacted but seen.
​
Moreover, within this same framework, we must also consider how governments and institutions increasingly employ nudges—subtle behavioral cues embedded in digital and visual environments—to steer individual and collective decisions without overt coercion (Adkisson, 2008; Whitehead et al., 2014). When integrated into simulations, these nudges can be modeled as visual or affective stimuli that exploit interpretive frames to influence behavior in politically or economically desirable directions. Altering the prominence of certain images, changing color schemes, or using emotionally charged content can guide responses to issues like immigration, public health, or voting. These techniques are not neutral; they intersect with markers of race, gender, or class, often reinforcing existing power structures under the pretense of “choice architecture.” Simulating such dynamics allows researchers to critically examine how policy-driven manipulation operates through visual regimes—co-opting perception itself as a tool of governance.
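One way such choice architecture might be formalized within a simulation is sketched below; the softmax functional form and all numbers are assumptions, not a description of any deployed system. Every option carries identical intrinsic utility, and a designer-controlled salience term does the steering:

```python
import math

# Choice architecture as it might enter a simulation (the softmax form and
# all numbers are assumptions). The three options are identical in intrinsic
# utility; only a designer-controlled salience term differs.
def choice_probs(utilities, salience, beta=1.0):
    scores = [beta * (u + s) for u, s in zip(utilities, salience)]
    z = sum(math.exp(s) for s in scores)
    return [round(math.exp(s) / z, 2) for s in scores]

utilities = [1.0, 1.0, 1.0]    # three objectively equivalent options
neutral   = [0.0, 0.0, 0.0]
nudged    = [0.0, 1.2, 0.0]    # option 2 made larger, brighter, listed first

print(choice_probs(utilities, neutral))  # [0.33, 0.33, 0.33]
print(choice_probs(utilities, nudged))   # [0.19, 0.62, 0.19]
```

Nothing about the options themselves changes; only their visibility does, which is precisely what makes the manipulation deniable.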
​
Astroturfing, when viewed through this integrated framework of computational sociology, social simulation, and vision, reveals itself as a potent mechanism for manufacturing perception and simulating consensus. The term “astroturfing” was coined to describe campaigns in which individuals or organizations disguise sponsored messaging as spontaneous grassroots activity in order to influence public opinion. It highlights that such campaigns are not genuine grassroots movements but are sponsored or orchestrated by third parties with ulterior motives (Keller & Klinger, 2018). By creating the illusion of grassroots support—through bots, fake accounts, coordinated visuals, or artificially inflated comment sections—astroturfing manipulates the interpretive frames of both individuals and digital crowds.
​
Within simulations, such tactics can be modeled to show how false signals of popularity or legitimacy can influence agent behavior, amplify conformity, and suppress dissent. The visual markers of astroturfing—likes, shares, emotionally charged imagery, user-generated content—become tools of symbolic engineering that reshape what agents see as authentic, credible, or socially validated. Governments and private actors alike can use these techniques not merely to inform, but to perform public opinion, blurring the boundary between social reality and orchestrated spectacle. In this way, astroturfing exemplifies how vision, affect, and structure converge to govern perception itself—embedding manipulation within the visible fabric of the social world.
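A toy model makes the mechanism explicit. In the sketch below, with dynamics and parameters assumed for illustration, agents periodically update their opinion in proportion to its displayed support, while a bloc of bots inflates one side's displayed count without holding any opinion at all:

```python
import random

# Toy model of astroturfed consensus (dynamics and parameters assumed).
# Agents occasionally reconsider and adopt the "pro" position with
# probability equal to its *displayed* support; bots hold no opinion but
# inflate the displayed count of one side.
def run(n_agents=1000, bots=0, steps=50, seed=1):
    random.seed(seed)
    opinions = [random.choice(("pro", "anti")) for _ in range(n_agents)]
    for _ in range(steps):
        displayed = (opinions.count("pro") + bots) / (n_agents + bots)
        for i in range(n_agents):
            if random.random() < 0.1:   # 10% of agents reconsider each step
                opinions[i] = "pro" if random.random() < displayed else "anti"
    return opinions.count("pro") / n_agents

print(run(bots=0))     # hovers near the genuine 50/50 split
print(run(bots=300))   # the same population drifts strongly toward 'pro'
```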
​
The dynamics of digital crowds can thus be analyzed through their collective visual responsiveness. How do masses of agents respond to visual stimuli such as propaganda, scenes of violence, or rapidly shifting fashion trends? Such phenomena are deeply affective and inherently visual. A crowd is not simply a collection of individuals reacting to information—it is a field of shared perception, subject to contagion, mimicry, and spectacle. Simulating these responses allows us to interrogate how visual culture modulates collective behavior, often in irrational or emotionally charged ways.
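A sketch of such affective contagion, with dynamics assumed after standard contagion models rather than any cited source, seeds a ring of agents with a single vivid stimulus and lets arousal spread by mimicry:

```python
import random

# Toy affective-contagion model (all parameters assumed): agents sit on a
# ring; an unagitated agent becomes agitated with probability 0.4 per
# agitated neighbor, and agitated agents calm down slowly on their own.
N, STEPS = 200, 40
random.seed(2)
agitated = [False] * N
agitated[0] = True                       # one agent sees the triggering image

for _ in range(STEPS):
    nxt = agitated[:]
    for i in range(N):
        exposed = agitated[(i - 1) % N] + agitated[(i + 1) % N]
        if not agitated[i] and random.random() < 0.4 * exposed:
            nxt[i] = True                # mimicry: arousal spreads by contact
        elif agitated[i] and random.random() < 0.05:
            nxt[i] = False               # slow individual calming
    agitated = nxt

print(f"{sum(agitated)} of {N} agents agitated after {STEPS} steps")
```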
​
Algorithmic vision—facial recognition, emotion detection, sentiment analysis—can itself be folded into these simulations to examine how machine-mediated perception intervenes in social life. When agents are not only watched but interpreted by non-human observers, new hierarchies and feedback loops emerge. Who is flagged as suspicious? Who is categorized as trustworthy, attractive, or dangerous? Algorithmic vision, once again, does not merely reflect society; it constructs it through encoded biases and computational judgments that reshape how individuals interact, navigate space, and interpret each other.
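The feedback loop lends itself to a compact sketch; all numbers are invented, and the dynamic loosely follows the "runaway feedback" analyses of predictive policing. Both groups behave identically, but flags follow attention and attention follows flags:

```python
# Sketch of a machine-vision feedback loop (all numbers invented). Both
# groups produce incidents at the same rate, but attention follows flags and
# flags follow attention, so a 5% initial disparity ends in near-total
# concentration. Noise is omitted: each round adds the *expected* flag count.
BASE_RATE = 0.05                        # identical true incident rate for A and B
scrutiny = {"A": 1000.0, "B": 1050.0}   # slightly more observation aimed at B
flags = {"A": 0.0, "B": 0.0}

for _ in range(20):
    for g in flags:
        flags[g] += scrutiny[g] * BASE_RATE   # flags scale with scrutiny, not behavior
    top = max(flags, key=flags.get)           # the system "learns" where the flags are
    low = "A" if top == "B" else "B"
    shift = min(100.0, scrutiny[low])
    scrutiny[top] += shift                    # ...and reallocates attention accordingly
    scrutiny[low] -= shift

print(flags)     # B is flagged ~6x more often despite identical behavior
print(scrutiny)  # and now absorbs essentially all of the system's attention
```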
​
In synthesizing these layers, we move toward a profoundly reflexive and politically charged sociology—one capable of interrogating the recursive entanglements between perception, representation, and social structure in the age of digital vision. The simulation becomes not just a methodological tool but a critical lens, allowing us to visualize the invisible forces that govern how we see and are seen.
​
Implications for mental health
This intersection is not merely an academic curiosity—it marks a critical juncture with profound implications for contemporary thought, particularly in the realms of critical theory, ethics, and public policy. As the boundaries between digital environments, algorithmic perception, and sociological inquiry dissolve, we are compelled to rethink not only the tools we use but the very frameworks through which we perceive and produce social reality.
Such research could radically extend our understanding of phenomena like social identity formation, prejudice, urban planning, crowd dynamics, manipulation, collective trauma, and global mental health issues—particularly in a time marked by escalating systemic instability and democratic erosion. By integrating computational simulations with perceptual modeling, we gain powerful tools to analyze how visual environments, algorithmic interventions, and socio-symbolic infrastructures interact to produce stress, fragmentation, and psychic overload. Mental health, in this context, is no longer reducible to individual pathology; it becomes a barometer of collective exposure to informational saturation, algorithmic manipulation, social engineering, and the erosion of shared symbolic anchors.
​
As algorithmically curated content increasingly floods daily life with images of violence, crisis, distortions, and unattainable ideals, individuals are caught in a loop of hyper-visibility and affective exhaustion, fostering anxiety and dissociation on a mass scale. These pathologies cannot be fully understood without considering how scopic regimes—what we are made to see, how we are seen, and what remains invisible—shape psychic life. Moreover, simulations can help map how exposure to trauma, misinformation, and economic precarity circulates unevenly across populations, deepening social inequalities and reinforcing conditions of structural despair.
At the geopolitical level, such research sheds light on how digital platforms and visual propaganda destabilize democracies by exploiting cognitive vulnerabilities, simulating consensus through astroturfing, and engineering polarization through micro-targeted fear stimuli. The fusion of vision, algorithm, and simulation enables authoritarian regimes and populist movements alike to manipulate affective landscapes at scale, bypassing rational deliberation and short-circuiting democratic reflexivity. In this light, the crisis of democracy is inseparable from the collapse of perceptual and symbolic coherence.
​
Thus, the fusion of computational sociology, social simulation, and vision not only advances scientific understanding—it becomes a critical tool for diagnosing the psychosocial consequences of global chaos and imagining alternative political and perceptual infrastructures. It invites a new form of sociological intervention: one that can anticipate systemic collapse, simulate resilience, and design environments that nurture mental health and democratic vitality alike.
​
The illusion as method
Are we living in a world founded on the method of illusions as the primary tool of cybercapitalism? This question touches the very heart of our contemporary condition. Cybercapitalism does not merely accommodate illusion; it institutionalizes it, rendering it a fundamental operational strategy. Illusions today are neither decorative nor marginal—they are meticulously engineered, algorithmically curated, and structurally embedded into the infrastructures of digital life. What once appeared as ephemeral spectacle now functions as permanent architecture. From the algorithmic personalization of newsfeeds to the synthetic manufacturing of public consensus through astroturfing, from influencer economies to immersive consumer environments, cybercapitalism fabricates perceptual worlds that simulate authenticity, agency, and social belonging—all while guiding behavior in alignment with profit-driven and political interests.
​
In this context, illusion becomes methodical. It is no longer a distortion of reality but a strategic mechanism for producing governable, predictable, and monetizable subjects. Through visual saturation, behavioral nudges, predictive analytics, and affective modulation, our lifeworlds are sculpted in advance of our choices. Perception is programmed, temporality is compressed, and symbolic meaning is subordinated to engagement metrics. We are not simply misled—we are structurally habituated into a world where our subjectivities are modulated under the illusion of autonomy. In this sense, yes: the method of illusions has become the dominant apparatus of cybercapitalism—an apparatus that is infrastructural, affective, and epistemological at once.
​
Jean Baudrillard’s theory of simulacra (Baudrillard, 1994)—images unmoored from referents, copies without originals—remains disturbingly prescient. His vision of a hyperreal society, in which signs circulate independently of any grounding in material or symbolic reality, anticipated the regime of spectacle and symbolic exhaustion that defines our digital era. But the current situation requires a nuanced revision: simulation is no longer merely a condition of postmodern illusion; it has been reprogrammed as a method. In computational sociology, behavioral modeling, and algorithmic governmentality, simulation is now a powerful tool for generating, testing, and governing social realities—not merely representing them.
​
This methodological rehabilitation of simulation does not invalidate Baudrillard’s critique; it radicalizes it. What we confront today is not the disappearance of illusion, but its instrumentalization. Cybercapitalism transforms illusion into productivity: synthetic publics, algorithmically amplified emotions, fabricated social proof, artificial goals, and predictive advertising do not distort reality—they construct it. These illusions are actionable, capable of engineering outcomes, altering elections, triggering mass consumption, or producing conformity. They operate through the logic of functional legitimacy: if it generates measurable effects, it is accepted as true. This technocratic pragmatism—the idea that utility equals truth—is the philosophical foundation of our engineered hyperreality.
​
We are therefore not beyond illusion, but enveloped within a modernity where illusion is no longer an accidental byproduct of ideology, but its primary technique. Cybercapitalism thrives on this paradox: it uses simulation as a rational tool for modeling society, while simultaneously eroding the symbolic anchors that tether social life to meaning, memory, or collective truth. The result is a reality that is programmable, scalable, and profitable, but also epistemologically unstable and ethically evacuated. What Baudrillard feared—the collapse of the real into simulation—has become not just a cultural diagnosis, but a governance model.
​
Hence, modernity under cybercapitalism is grounded in illusion—not as fiction, but as design. The simulation does not conceal the real; it replaces it with a coded, quantifiable, and manipulable stand-in. The most urgent task of critical theory today is thus not merely to expose illusions, but to rigorously distinguish between simulation as method and simulation as ideology. The former holds emancipatory potential if used reflexively and critically; the latter risks total complicity with systems of domination. The imperative is to reclaim simulation as a space of experimentation and critique, rather than surrender it to the market logics and cognitive regimes that increasingly dictate our vision of the world.
​
Bibliography
Abbington, B. (Ed.). (2019). Handbook of computational sociology. Delve Publishing.
Adkisson, R. V. (2008). [Review of the book Nudge: Improving decisions about health, wealth and happiness, by R. H. Thaler & C. R. Sunstein]. The Social Science Journal, 45(4), 700–701.
Andler, D. (2023). Intelligence artificielle, intelligence humaine : la double énigme. Paris: Gallimard.
Bao, H., Zhang, J., Cao, M., & Evans, J. A. (2024). From division to unity: A large-scale study on the emergence of computational social science, 1990–2021. arXiv.
Baudrillard, J. (1994). Simulacra and simulation (S. F. Glaser, Trans.). University of Michigan Press. (Original work published 1981)
Breedlove, S. M., Rosenzweig, M. R., & Watson, N. V. (2012). Biological psychology: An introduction to behavioral, cognitive, and clinical neuroscience (7th ed.). Sinauer Associates.
Crawford, K., & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI & Society, 36(4), 1105–1116.
Edelmann, A., Wolff, T., Montagne, D., & Bail, C. A. (2020). Computational social science and sociology. Annual Review of Sociology, 46(1), 61–81.
Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1), 1–47.
Frisby, J. P., & Stone, J. V. (2010). Seeing: The computational approach to biological vision (2nd ed.). MIT Press.
Gilbert, N., & Troitzsch, K. G. (2005). Simulation for the social scientist (2nd ed.). Open University Press.
Goar, V., Sharma, A., Shin, J., & Mridha, M. F. (Eds.). (2024). Deep learning and visual artificial intelligence: Proceedings of ICDLAI 2024. Springer.
Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7(2).
Jackson, J. C., Rand, D., Lewis, K., Norton, M. I., & Gray, K. (2017). Agent-based modeling: A guide for social psychologists. Social Psychological and Personality Science, 8(4), 387–395.
Jay, M. (1988). Scopic regimes of modernity. In H. Foster (Ed.), Vision and visuality (pp. 3–23). Bay Press.
Keller, T. R., & Klinger, U. (2018). Social bots in election campaigns: Theoretical, empirical, and methodological implications. Political Communication, 36(1), 171–189.
Marr, D. (2010). Vision: A computational investigation into the human representation and processing of visual information. MIT Press. (Original work published 1982)
Miller, J. H., & Page, S. E. (2007). Complex adaptive systems: An introduction to computational models of social life. Princeton University Press.
Sun, R. (Ed.). (2006). Cognition and multi-agent interaction: From cognitive modeling to social simulation. Cambridge University Press.
Tsotsos, J. K. (2021). A computational perspective on visual attention. MIT Press.
Whitehead, M., Jones, R., Howell, R., Lilley, R., & Pykett, J. (2014). Nudging all over the world: Assessing the impacts of the behavioural sciences on public policy. ESRC Negotiating Neuroliberalism Project Report. http://changingbehaviours.wordpress.com
Zhao, Q. (Ed.). (2017). Computational and cognitive neuroscience of vision (1st ed.). Springer.