Over the past 7 years, my work and creative endeavors have explored the convergence of cognitive science, adaptive technology, and data-driven behavioral innovation. This exploration has illuminated an intriguing modern phenomenon: technology has evolved beyond a mere tool or aid—it is becoming an intrinsic extension of our cognitive processes. We have entered an era where constant digital stimulation, perpetual multitasking, and reliance on technological assistance have redefined what it means to think, learn, feel, and function efficiently.

Neurosculptor sits at this intersection as an interactive neuro-symbolic cognitive installation that reveals how everyday digital behaviors affect the brain’s neuroelectrical and neurochemical state. By combining neuroscience, interactive technology, and behavioral psychology, it transforms intangible, complex cognitive and emotional processes into a visual, resonant, and interactive experience—making the invisible impact of modern life on cognition both felt and understood.

The greatest challenge in this project was translating the intricate architecture of real-world neuroscience into a symbolic framework that could be both artistically rendered and psychobiologically coherent. To address this, I developed a JSON-based symbolic mapping system that functioned as the behavioral “brain” of the sculpture. This system was informed by contemporary neurocognitive architectures such as ACT-R, enabling dynamic, structured representations of cognitive processes—including attention shifts, memory retrieval, and decision-making pathways—within the sculptural form.
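
To give a sense of its shape, here is a minimal, illustrative sketch of that nested JSON mapping, written as a Python dict; the scenario names, weights, and labels below are simplified stand-ins, not the production data:

```python
# Illustrative sketch of the nested behavior-to-brain-state mapping.
# All keys and weights are hypothetical examples, not the installation's real values.
SYMBOLIC_MAP = {
    "doomscrolling": {
        "neurotransmitters": {"dopamine": +0.6, "cortisol": +0.3, "acetylcholine": -0.2},
        "regions": ["VTA", "ACC", "Insula"],        # symbolic region activations
        "pathways": ["SN", "DMN"],                  # engaged networks
        "animation": {"pattern": "erratic_blink", "bpm": 140},
    },
    "deep_work": {
        "neurotransmitters": {"dopamine": +0.2, "cortisol": -0.3, "acetylcholine": +0.5},
        "regions": ["dlPFC", "ACC"],
        "pathways": ["TPN"],
        "animation": {"pattern": "steady_glow", "bpm": 12},
    },
}
```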

In Spring 2024, during my master's studies and HCI research at NYU, I was honored to receive permission to join Professor David Poeppel's Ph.D. panel, "The Origins of Cognitive Neuroscience," at the Neuroscience Department. This experience significantly shaped the trajectory of my project.

Coming from a background in cognitive psychology, with a longstanding fascination with psychobiological mechanisms and neuronal representations, I found that this experience deeply challenged and expanded my understanding of cognitive representation. For my final paper in this panel, titled "The Evolution of Cognitive Representation: The Visualization of the Mind," I explored how the mind’s abstract processes can be tangibly represented. This academic journey not only profoundly influenced my conceptual approach but also provided essential insights and scientific grounding, ultimately inspiring the development of this interactive neuro-symbolic cognitive experience.

My initial idea for this project was called Eudaimonia, inspired by Dr. Cal Newport’s book Deep Work. The term, from Ancient Greek, means “good spirit” or “flourishing”—representing “a state in which you’re achieving your full human potential.” I wanted to explore how overstimulation from digital tools, information overload, and chronic multitasking have impaired our ability to enter states of deep focus, and how we might reclaim it.

I conducted qualitative research and interviews, using a story-based prototype to initiate conversations and test my behavioral hypothesis:

• 25 student interviews (undergraduate, graduate, and Ph.D.) and 4 professor interviews across 3 NYU departments over 2 months.

• Scientific literature reviews on topics such as neuroplasticity, the default mode network (DMN), reward circuitry, and executive control.

• Public datasets such as Digital Media Exposure and Cognitive Functioning, which reveal trends in attentional drift, memory decline, and task-switching impairments in digitally saturated environments.

An insight I found during this process shifted my direction: people were less interested in how an app might help them focus, and more intrigued by why deep work matters—and how their habits change their brains. This led to the conception of an educational-artistic installation rooted in psychobiological modeling and symbolic AI.

Model Architecture

Neurosculptor is a multi-layered interactive experience that uses a symbolic behavior-to-brain-state model to translate digital habits into changes in:

• Neurotransmitter activity (e.g., dopamine, cortisol, acetylcholine)

• Brain region activation: 20 brain regions (e.g., ACC, dlPFC, VTA, insula)

• Cognitive/emotional network dynamics: 8 neural pathways (e.g., DMN, SN, TPN)

By simulating these processes in a visual and sensory way using light, motion, and feedback loops, it creates a bridge between abstract neuroscience and lived behavior.

The form factor—a digitally fabricated, hand-sculpted sculpture that glows, pulses, and morphs—was inspired by the fluidity of cognition. LED patterns are not merely decorative; each pulse represents a neurochemical rhythm or cognitive signal (e.g., fast dopamine spikes from social media scrolling, GABA’s calming glow from mindful breathing).

The sculpture integrates sensors and Raspberry Pi-based hardware to translate user interactions (e.g., button presses or page selections) into symbolic neural activations. LED behavior is mapped to specific brain functions using a cognitive pathway matrix.
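
As a rough sketch of the input side, the snippet below shows how a physical button press could be wired to a handler with gpiozero; the pin number and handler body are illustrative assumptions, not the installation's actual wiring:

```python
from gpiozero import Button
from signal import pause

# Hypothetical wiring: a scenario-select button on GPIO 17 (pin choice is arbitrary).
scenario_button = Button(17)

def on_select():
    # In the installation, a press advances the storyline and fires the
    # symbolic neural activation for the currently highlighted scenario.
    print("scenario selected -> trigger symbolic activation")

scenario_button.when_pressed = on_select
pause()  # block forever, reacting to button events
```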

• Backend Engine: Python-based symbolic behavior-to-neurochemical mapping using a nested JSON structure

• Neural Pathway Logic: covers 12 behavioral scenarios, such as multitasking, deep work, meditation, AI-assisted vs. traditional learning, falling in love, and dreaming during sleep

• Event Handling: each of the 12 events in the model drives neurotransmitter shifts and animation sequences

Some Code Snippets
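
Since the full source is beyond the scope of this page, here is a condensed sketch of the backend loop, assuming the SYMBOLIC_MAP structure sketched earlier; the file path, function names, and clamping rule are illustrative:

```python
import json

def load_map(path="symbolic_map.json"):
    """Load the nested behavior-to-brain-state mapping (path is illustrative)."""
    with open(path) as f:
        return json.load(f)

def apply_event(state, event, mapping):
    """Shift the simulated neurotransmitter state and return animation parameters."""
    spec = mapping[event]
    for nt, delta in spec["neurotransmitters"].items():
        # Clamp each simulated level to [0, 1] so repeated events saturate gracefully.
        state[nt] = min(1.0, max(0.0, state.get(nt, 0.5) + delta))
    return spec["animation"], spec["regions"], spec["pathways"]

if __name__ == "__main__":
    mapping = load_map()
    state = {}
    animation, regions, pathways = apply_event(state, "deep_work", mapping)
    # The animation dict is then handed to the LED driver, which renders
    # pattern/bpm as light pulses on the sculpture.
    print(state, animation, regions, pathways)
```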

To facilitate an intuitive and emotionally engaging user experience:

• Interface Logic: interaction is structured around a storyline progression; users select behavioral scenarios from a simple touchscreen menu.

• Feedback Design: visual feedback is immediate and meaningful—light animations pulse in synchrony with simulated neurotransmitter activity.

• Emotional Mapping: each animation is mapped to an emotional state (e.g., erratic LED blinking for anxiety, a soft breathing glow for calm); a sketch of this mapping follows the list.

• Accessibility: visual indicators are color-coded but also rely on motion patterns, so users with color vision deficiencies can still interpret states.

• Journey Flow: users begin with curiosity, experience cognitive-emotional feedback, and leave with a personalized reflection prompt.

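As referenced in the Emotional Mapping item above, here is a simplified sketch of how a simulated state could select an animation style; the thresholds, transmitter names, and pattern labels are hypothetical:

```python
# Hypothetical mapping from a simulated neurochemical state to an animation style.
def pick_animation(state):
    """Choose an LED animation from simulated neurotransmitter levels in [0, 1]."""
    if state.get("cortisol", 0.0) > 0.7:
        return {"pattern": "erratic_blink", "bpm": 160}   # anxiety: fast, irregular flicker
    if state.get("gaba", 0.0) > 0.6:
        return {"pattern": "breathing_glow", "bpm": 8}    # calm: slow sinusoidal fade
    return {"pattern": "steady_glow", "bpm": 20}          # neutral baseline
```
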
Unlock the Mind — Interaction Flow

NeuroGlow Lab — Interaction Flow

- 3D printed cerebral model 
- Hand-sculpted body shell
- Integrated LED diffusers and wire channels
- Modular mounting for Raspberry Pi and LED matrix
- Materials: clear resin, clay, acrylic paint, clear coat, Raspberry Pi 5, 350+ optic fibers, UV resin, 20 LEDs, 7-inch monitor, 5V power cable

Final Outcome, Future Plan & Special Thanks

Observable Outcomes

• Average Duration of Interaction: > 15 minutes
• Repeat Interactions: > 85%
• Behavioral Observations: curiosity & intrigue, quiet focus, deep resonance, surprise & self-realization
• Qualitative Feedback: > 98% of users reported increased awareness of their own digital habits and cognitive states
• Audience Engagement: viewers showed spontaneous emotional responses when their habits were “reflected” back to them (e.g., endless scrolling triggering bright, erratic LEDs)
• Educational Impact: many expressed that, after interacting with the piece, they better understood how their daily digital activities trigger rewiring processes in the brain
• Emotional Resonance: by embedding psychological symbolism in the visual animations, users related more deeply to their internal states

Future Plan

In the next iteration, I aim to integrate an eye-tracking camera module into the sculpture to detect pupil dilation and eye movement, offering deeper insight into users’ attentional and emotional states. This would allow the system to respond dynamically to unconscious physiological signals—enhancing the bio-symbolic loop and enabling richer cognitive feedback.

While the initial version relies on symbolic simulation, this enhancement would bridge into real-time cognitive sensing, bringing the installation closer to a full neuroadaptive system.

Through this process, I’ve learned the profound importance of combining art, science, and interaction design. It not only deepened my appreciation for neuroscience but also reshaped how I view the role of technology—not just as a tool, but as a reflective mirror of our inner lives.

Special Thanks

I would like to extend my deepest gratitude to Professor David Poeppel, whose intellectual courage inspired me to embark on this complex project. The way he continually stimulated my thinking in cognitive science and human-computer interaction has been a guiding force throughout this journey.

To Professor Danny Rozin, thank you for teaching me everything I know about physical computing and digital fabrication. Your relentless encouragement to push boundaries—and your belief that I am like a “bulldog” who never gives up—has become one of my core “concept neurons,” continuously fueling my creative and technical pursuits.

Lastly, to all my friends and faculty at ITP, thank you for fostering a learning environment where unstructured exploration is not only allowed but celebrated. It is within this unique curriculum that I found the courage to dissolve boundaries between disciplines and to imagine human-computer interaction as a truly integrative, holistic system.

Self-Reflection

This is not just a story about the intersection of technology and neurobiology—it is about giving voice to the mute, giving wings to imagination, and opening a whole new channel in interactive telecommunication.

At its core, this project is not only about understanding the brain—it is about enhancing it. To rewire. To reshape. To co-author the architectures of our own cognition.

Each signal, each intention, each interaction—a stroke in the evolving masterpiece of your mind.

You are the author of your own neurosculpture.

— The End —

I: Background Story
II: Model Architecture (Electronics, Programming & Fabrication)
III: Future Plan & Special Thanks