Neurosculptor is an interactive neuro-symbolic installation that lets you see how your everyday digital habits literally rewire your brain — in real time.
Over the past seven years, my work has explored the convergence of cognitive science, adaptive technology, and behavioral data. We live in an age where technology is no longer just a tool — it’s an extension of our cognition, quietly reshaping how we think, feel, and focus.
Neurosculptor sits at this intersection, translating invisible neurochemical and network-level changes into a tangible, sensory experience, making the invisible impact of modern life on our minds both felt and understood.
The central challenge was transforming the invisible complexity of cognitive neuroscience into an experience anyone could intuitively grasp — without oversimplifying the science. This meant solving three interconnected problems:
1. Scientific Coherence — building a symbolic architecture faithful to real neurocognitive processes, from neurotransmitter activity to network dynamics.
2. Artistic Translation — rendering those processes in light, motion, and form in a way that would resonate emotionally as well as intellectually.
3. Behavioral Accessibility — designing an interaction flow that lets non-experts immediately “read” their own cognitive-emotional state through the sculpture.
To achieve this, I developed a JSON-based symbolic mapping system — a “behavioral brain” — informed by architectures like ACT-R and Affective Computing. This enabled dynamic representations of attention shifts, memory retrieval, and decision-making pathways, all mapped to sensory cues within the sculpture.
Neurosculptor was born from a deep question: Can the mind’s invisible processes be made tangible?
In Spring 2024, during my master’s at NYU, I joined Professor David Poeppel’s Ph.D. panel on “The Origins of Cognitive Neuroscience” — an experience that profoundly shaped this project. With a background in cognitive psychology and a fascination for psychobiological mechanisms, I explored how abstract mental processes could be represented in physical form.
For my final paper, The Evolution of Cognitive Representation: The Visualization of the Mind, I examined methods for mapping thought into perceivable patterns. That work became the scientific and conceptual backbone for Neurosculptor — grounding its symbolic architecture in both rigorous neuroscience and an artistic vision for making cognition visible.
The first concept for this project was called Eudaimonia — inspired by Dr. Cal Newport’s Deep Work. In Ancient Greek, the term means “good spirit” or “flourishing,” describing a state where you realize your full human potential. I set out to explore how constant digital stimulation, information overload, and chronic multitasking impair our ability to reach deep focus — and how we might reclaim it.
To ground the concept, I conducted:
• 25 student interviews (undergraduate, graduate, PhD) and 4 professor interviews across 3 NYU departments
• Scientific literature reviews on neuroplasticity, default mode network, reward circuitry, and executive control
• Analysis of public datasets on digital media exposure and cognitive functioning, revealing measurable trends in attentional drift, memory decline, and task-switching impairments
The breakthrough insight: people weren’t primarily motivated by learning how to focus better — they were far more fascinated by why deep work matters and how their habits physically reshape their brain. That shift in curiosity led me to shelve the “focus app” concept for now and instead create an educational-artistic installation rooted in psychobiological modeling and symbolic AI.
Neurosculptor turns your everyday digital habits into a living, glowing sculpture — a mirror for your mind.
When you interact with it, you choose from real-life scenarios like deep work, mindful breathing, multitasking, or endless social media scrolling. As soon as you make a choice, the sculpture responds with light and motion patterns mapped to the neurochemical rhythms and brain network activity associated with that behavior.
• Erratic bursts mimic dopamine spikes from rapid scrolling
• Soft breathing glows mirror calming GABA signals from mindfulness
• Shifting pulses reflect attention shifts, memory retrieval, and decision-making pathways
The user experience unfolds in three stages:
1. Curiosity — Users explore scenarios through a simple touchscreen menu
2. Reflection — The sculpture “plays back” their chosen state, turning invisible brain activity into a sensory performance of light, motion, and rhythm
3. Insight — A personalized prompt invites them to reflect on how their habits shape their cognitive and emotional patterns
By bridging neuroscience, symbolic AI, and emotional design, Neurosculptor makes complex mental processes not only visible, but felt — creating an experience that’s part science demo, part meditation, part self-discovery.
The sculpture’s form — a hybrid of digital fabrication and hand-sculpting — was inspired by the fluidity of human cognition. Its surface glows, pulses, and morphs, with each light pattern directly tied to a neurochemical rhythm or cognitive signal.
• Fast, erratic bursts capture dopamine spikes from rapid scrolling
• Slow, breathing pulses mirror the calming effect of GABA during mindfulness
• Shifting wave patterns represent the dynamic ebb and flow of attention and memory
This approach ensures that the aesthetics are not decoration — they are data embodied. Every curve, glow, and movement carries both scientific meaning and emotional weight, creating an experience that resonates on multiple levels at once.
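As a rough sketch, the two contrasting rhythms described above could be generated as simple brightness functions. The periods, levels, and function names here are my own illustration, not the installation’s actual animation code:

```python
import math
import random

def breathing_glow(t, period=6.0):
    """Slow sinusoidal brightness (0-255), mimicking a calm, GABA-like rhythm."""
    return int(127.5 * (1 + math.sin(2 * math.pi * t / period)))

def erratic_burst(rng):
    """Random brightness spikes, mimicking dopamine bursts from rapid scrolling."""
    return rng.choice([0, 0, 64, 255])  # mostly dark, with sudden flashes

rng = random.Random(42)
print([breathing_glow(t) for t in (0.0, 1.5, 3.0)])  # rises then falls: [127, 255, 127]
print([erratic_burst(rng) for _ in range(5)])
```

The design choice is the contrast itself: a continuous waveform reads as calm, while discontinuous jumps read as agitation, so the viewer perceives the difference before interpreting it.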
Behind the sculpture’s fluid light and motion is a tightly integrated hardware system that turns user actions into symbolic “brain activity.” When a participant selects a scenario on the touchscreen, the system translates that input into a series of symbolic neural activations. These activations trigger LED patterns, colors, and motion sequences mapped to specific brain regions and cognitive pathways.
Core components:
• Raspberry Pi — the central controller running the symbolic AI model
• Touchscreen interface — scenario selection and interaction input
• LED matrix + optic fibers — rendering brain states as dynamic light patterns
• Cognitive pathway matrix — mapping each scenario to brain functions (e.g., ACC for attention, VTA for reward, Insula for emotion)
This physical computing layer ensures that every visual output isn’t just aesthetic — it’s a live, symbolic representation of cognitive and neurochemical processes.
The behavioral “brain” of Neurosculptor is a Python-based symbolic mapping engine, designed to translate user actions into changes in simulated neurotransmitter levels, brain region activations, and network dynamics.
At the core is a nested JSON architecture that stores:
• Behavioral scenarios (e.g., multitasking, deep work, mindful breathing, falling in love, dreaming)
• Linked brain regions (20 in total, including ACC, dlPFC, VTA, and Insula)
• Neural pathways (8 key networks, such as the Default Mode Network, Salience Network, and Task-Positive Network)
• Neurotransmitter effects (e.g., dopamine spikes, cortisol release, acetylcholine shifts)
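The nested structure described above might look something like the following. The field names, activation values, and scenario keys are my own illustrative sketch, not the project’s actual schema:

```python
import json

# Hypothetical sketch of two scenario entries in the nested JSON model.
# Keys, values, and structure are illustrative assumptions.
scenario_json = """
{
  "social_media_scrolling": {
    "regions": {"VTA": 0.9, "ACC": 0.3, "dlPFC": 0.2},
    "networks": {"Default Mode Network": 0.7, "Task-Positive Network": 0.2},
    "neurotransmitters": {"dopamine": "spike", "cortisol": "rise"}
  },
  "mindful_breathing": {
    "regions": {"Insula": 0.8, "ACC": 0.6},
    "networks": {"Salience Network": 0.6},
    "neurotransmitters": {"GABA": "rise", "cortisol": "fall"}
  }
}
"""

model = json.loads(scenario_json)
print(model["mindful_breathing"]["neurotransmitters"]["GABA"])  # rise
```

Keeping the model declarative like this means new scenarios can be added by editing data rather than code, which is what makes the "behavioral brain" easy to extend.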
How it works:
1. User selects a scenario via the touchscreen interface.
2. The JSON model triggers predefined neural activations and neurotransmitter changes associated with that behavior.
3. These activations drive LED animation sequences, color shifts, and pulse rhythms — all tied to the symbolic cognitive pathway matrix.
4. The Raspberry Pi sends commands to the LED matrix and optic fibers to render the state in real time.
This symbolic approach allows Neurosculptor to simulate complex mental states without requiring invasive neuroimaging — making it an accessible yet scientifically grounded way to visualize the brain’s adaptive processes.
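The four steps above could be sketched as a minimal engine loop. The scenario table, the region-to-LED mapping, and the function name are assumptions for illustration, not the installation’s actual code:

```python
# Minimal sketch of the symbolic engine loop (steps 1-4).
# All data and names here are illustrative assumptions.

SCENARIOS = {
    "deep_work":        {"regions": {"dlPFC": 0.9, "ACC": 0.7}, "rhythm": "steady"},
    "social_scrolling": {"regions": {"VTA": 0.9},               "rhythm": "erratic"},
}

# Hypothetical mapping from brain regions to LED indices on the matrix.
REGION_TO_LEDS = {"dlPFC": [0, 1, 2], "ACC": [3, 4], "VTA": [5, 6, 7]}

def render_state(scenario_name):
    """Translate a selected scenario into a rhythm and (led, brightness) commands."""
    state = SCENARIOS[scenario_name]
    commands = []
    for region, activation in state["regions"].items():
        for led in REGION_TO_LEDS.get(region, []):
            commands.append((led, int(activation * 255)))  # scale 0-1 to 0-255
    return state["rhythm"], commands

rhythm, commands = render_state("deep_work")
print(rhythm, commands)
```

On the actual hardware, the returned commands would be handed to an LED driver library on the Raspberry Pi; the sketch stops at the command list so it stays runnable anywhere.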
The interaction was designed to be intuitive, emotionally engaging, and accessible to a wide range of participants — from curious passersby to cognitive scientists.
Flow:
1. Entry Point — Curiosity
• Users are greeted with a simple touchscreen menu offering behavioral scenarios (e.g., mindful breathing, multitasking, social media scrolling).
• Scenarios are phrased in plain language so no prior neuroscience knowledge is required.
2. Immersion — Real-Time Reflection
• Upon selection, the sculpture immediately responds with light patterns, rhythms, and motions that represent the chosen cognitive-emotional state.
• Feedback is multi-sensory (light, motion, rhythm) to deepen engagement.
3. Closure — Insight & Reflection
• After the visual sequence, a personalized reflection prompt appears, encouraging users to consider how their habits shape their brain over time.
Accessibility Features:
• Color + motion redundancy — key signals are represented through both color and movement so participants with color vision deficiencies can still interpret them.
• Simple language labels — avoids jargon while still being scientifically accurate.
• Immediate feedback — ensures users can connect their action to the brain-state output without delay.
The result is an experience that is easy to enter, memorable to engage with, and personal to leave — ensuring that the science behind Neurosculptor doesn’t just stay in the mind, but lands in the heart.
Neurosculptor’s physical form was built to embody the complexity and adaptability of the human mind — fluid, layered, and responsive. The build combined digital fabrication for precision with hand-sculpting for organic texture and emotional presence.
Core fabrication elements:
• 3D-printed cerebral model — representing the structural foundation of the brain
• Hand-sculpted body shell — adding organic form and tactile depth
• Integrated LED diffusers & wire channels — ensuring smooth, lifelike light distribution
• Modular mounting — for Raspberry Pi, LED matrix, and future sensor expansions
• Materials — clear resin, clay, acrylic paint, UV resin, optic fibers, 20 LEDs, 7-inch monitor, 5V power cable
Build philosophy:
Every material and fabrication decision was made to reinforce the “mind as living architecture” concept:
• Clear resin and optic fibers diffuse light like neural pathways carrying electrical impulses
• The shell’s hand-sculpted contours mimic the irregular yet patterned folds of the cortex
• Modular hardware design ensures the sculpture can evolve alongside new sensing capabilities
This approach allowed the Neurosculptor to function as both a scientific model and an evocative art piece — a fusion of engineering precision and human craft.
The next evolution of Neurosculptor is to move from symbolic simulation into real-time cognitive sensing. Planned upgrades include:
• Eye-tracking module — measuring pupil dilation and gaze patterns to detect unconscious shifts in attention and emotion
• Bio-sensing integration — adding heart rate variability and skin conductance to capture physiological markers of stress, focus, and calm
• Adaptive feedback loops — enabling the sculpture to change its output in real time as the user’s state changes
These enhancements will transform Neurosculptor from an educational-artistic installation into a neuroadaptive system — capable of providing live cognitive feedback for use in:
• Corporate wellness programs — helping teams monitor and manage digital fatigue
• Education — teaching students how focus, distraction, and emotion interact in the brain
• Healthcare — aiding neurorehabilitation and mindfulness-based interventions
I would like to extend my deepest gratitude to Professor David Poeppel, whose intellectual courage inspired me to embark on this complex project. His ongoing stimulation of my thinking within the realms of cognitive science and human-computer interaction has been a guiding force throughout this journey.
To Professor Danny Rozin, thank you for teaching me everything I know about physical computing and digital fabrication. Your relentless encouragement to push boundaries—and your belief that I am like a “bulldog” who never gives up—has become one of my core “concept neurons,” continuously fueling my creative and technical pursuits.
Lastly, to all my friends and faculty at ITP, thank you for fostering a learning environment where unstructured exploration is not only allowed but celebrated. It is within this unique curriculum that I found the courage to dissolve boundaries between disciplines and to imagine human-computer interaction as a truly integrative, holistic system.
Neurosculptor is more than a visualization tool — it is a mirror for the mind.
Building it taught me the value of combining rigorous science with artistic storytelling to make complexity emotionally resonant and actionable.
It’s not just about understanding the brain — it’s about co-authoring the architecture of your own cognition.
Every signal, every choice, every interaction becomes a stroke in the evolving masterpiece of your mind.