🎧 Intro: Signal, Noise, and The Space Between
Walking into the Poeppel Lab doesn't feel like entering a typical psychology workspace.
It feels like standing at a crossroads — where the messy reality of human biology meets the precise logic of computational systems.
I arrived here with a dual curiosity:
- How do humans hold onto connection when the medium glitches?
- How do machines learn to "hear" the world as we do?
This role isn't just about running code or scheduling participants.
It is about measuring the fidelity of perception.
🌊 Part I: The Human Glitch
Context: The Videoconferencing & Interaction Study
We live our lives through screens now. But digital connection is fragile.
The lab’s mission was to quantify that fragility.
We weren't just looking at "lag"; we were looking at the breakdown of social rhythm.
The Task
How much jitter does it take to kill a conversation?
How much delay before empathy turns into frustration?
The Action
I became the architect of these digital imperfections.
- Orchestrating Friction: I configured experiment environments to simulate precise network degradations, injecting controlled jitter and delay into otherwise smooth signals.
- Capturing the Invisible: I didn't just log data; I monitored the subtle shifts in human interaction—the awkward pauses, the missed cues, the moments where technology got in the way of humanity.
It taught me that in HCI, the most important metric isn't bandwidth. It’s coherence.
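The degradation logic described above can be sketched in plain Python. This is a minimal illustration, not the lab's actual pipeline: the packet interval, delay, and jitter values are placeholder assumptions, and real experiments would shape traffic at the network layer rather than in a toy simulation.

```python
import random

def simulate_degradation(n_packets=1000, interval_ms=20.0,
                         base_delay_ms=80.0, jitter_ms=30.0, seed=0):
    """Simulate one-way packet delivery under delay and jitter.

    Returns the fraction of packets that arrive out of order --
    a rough proxy for how badly jitter disrupts conversational rhythm.
    """
    rng = random.Random(seed)
    arrivals = []
    for i in range(n_packets):
        sent = i * interval_ms
        # Each packet's network delay = fixed base delay + uniform jitter.
        delay = base_delay_ms + rng.uniform(0.0, jitter_ms)
        arrivals.append(sent + delay)
    reordered = sum(1 for a, b in zip(arrivals, arrivals[1:]) if b < a)
    return reordered / (n_packets - 1)

# When jitter stays below the send interval, order is preserved;
# once it exceeds the interval, packets start arriving out of order.
low = simulate_degradation(jitter_ms=5.0)
high = simulate_degradation(jitter_ms=60.0)
```

The interesting threshold is exactly the one the study probes: degradation only becomes socially visible once jitter outpaces the rhythm of the exchange.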
📡 Part II: The Machine Ear
Context: Advanced Audio Feature Representation (STM)
While Part I explored how humans struggle with signals, Part II explores how machines master them.
The lab had already established a breakthrough: Spectrotemporal Modulation (STM).
They showed that this interpretable feature set could classify sounds accurately, even with a simple model.
The Challenge
But we also wanted to know: What is the limit?
If a simple network (an MLP) performs this well, what happens when we give the system a more expressive architecture?
The Action (Ongoing)
I am currently pushing the boundaries of the STM framework.
- Beyond the Baseline: Moving past the multilayer-perceptron baseline, I am implementing deep learning architectures capable of capturing long-range dependencies and global context.
- Architectural Exploration: I am engineering complex sequence models to test the scalability of STM features.
- The Goal: To discover if adding computational depth yields a proportional gain in auditory understanding.
I’m not just training a classifier; I’m trying to see if a machine can learn the "texture" of sound, not just the frequency.
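A minimal sketch of the kind of sequence model meant by "long-range dependencies and global context": a small Transformer encoder over a sequence of STM frames, where self-attention lets every frame attend to every other. The shapes, layer sizes, and mean-pooling choice are my assumptions for illustration, not the project's final architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder shapes: a sequence of STM frames per clip. Illustrative only.
N_FRAMES, FRAME_DIM, N_CLASSES = 100, 64, 10

class STMSequenceClassifier(nn.Module):
    """Transformer encoder over STM frames: self-attention gives each
    frame access to the whole sequence, i.e. global temporal context."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=FRAME_DIM, nhead=4, dim_feedforward=128,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(FRAME_DIM, N_CLASSES)

    def forward(self, x):                 # x: (batch, frames, dim)
        h = self.encoder(x)               # contextualized frames
        return self.head(h.mean(dim=1))   # mean-pool over time, classify

model = STMSequenceClassifier()
logits = model(torch.randn(8, N_FRAMES, FRAME_DIM))
```

Comparing this against the MLP baseline on the same STM features is one way to measure whether added computational depth actually buys proportional auditory understanding.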
🧩 The Synthesis
These two projects seem different—one is Psychology, one is AI.
But to me, they are the same inquiry.
One looks at biological intelligence struggling with noise.
The other looks at artificial intelligence trying to find the signal.
Status: Ongoing.
Tools:
Python · PyTorch · Experiment Design · Signal Processing