Duration:
24 Hours
Contribution:
Research & Design
Utilization:
Vercel, Lovable & Figma
Every 13 minutes, someone dies in a distracted driving accident in the US. Current car safety systems bark warnings at drivers, creating alert fatigue and destroying trust. What if safety systems could communicate like a co-pilot instead of an alarm?
Road conditions, weather, traffic patterns, and surrounding obstacles
Speed, lane position, proximity sensors, and vehicle dynamics
Attention level, stress indicators, reaction patterns, and preferences
Phase II
By intelligently selecting the most appropriate alert channel based on context, the system prevents information overload and ensures drivers can process critical warnings without distraction.
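To make that selection logic concrete, here is a minimal TypeScript sketch of how context-based channel selection could work; the DrivingContext shape, the thresholds, and the selectAlertChannel function are illustrative assumptions, not the production design.

```typescript
// Hypothetical context model combining the three signal groups above:
// environment, vehicle, and driver state.
type DrivingContext = {
  environment: { weather: "clear" | "rain" | "snow"; trafficDensity: number }; // 0–1
  vehicle: { speedKph: number; laneOffsetMeters: number };
  driver: { attention: number; stress: number }; // 0 = low, 1 = high
};

type AlertChannel = "hud-glow" | "carplay-visual" | "seat-haptic" | "spatial-audio";

// Pick the least intrusive channel that still fits the situation,
// so the driver is not flooded with every modality at once.
function selectAlertChannel(ctx: DrivingContext, severity: number): AlertChannel {
  if (severity < 0.3 && ctx.driver.attention > 0.7) return "hud-glow"; // subtle cue is enough
  if (severity < 0.6) return "carplay-visual"; // familiar screen the driver already trusts
  if (ctx.driver.attention < 0.4) return "seat-haptic"; // eyes likely off road, use touch
  return "spatial-audio"; // directional sound for urgent, located hazards
}
```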
How might we design an AI-powered alert system that escalates naturally, like a driving instructor adapting to each driver’s needs, while preventing alert fatigue?
Personalized alerts with appropriate emotional tone improve driver comprehension and reduce reaction time by up to 40%, potentially preventing thousands of accidents annually.
Adaptive escalation prevents "alert fatigue" by only intensifying warnings when necessary, fostering a partnership between driver and safety system rather than perceived nagging.
Multi-modal feedback accommodates different learning styles, sensory preferences, and accessibility needs, ensuring safety features work effectively for all drivers.
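One way to honor those sensory preferences and accessibility needs is a per-driver profile that the alert layer consults before choosing a modality. The sketch below is only an assumption about how such a profile could look; the DriverProfile fields and allowedModalities helper are hypothetical names, not a finalized spec.

```typescript
// Hypothetical driver profile capturing sensory preferences and accessibility needs.
type DriverProfile = {
  hearingImpaired: boolean; // rely on visual + haptic instead of audio-only alerts
  hapticsEnabled: boolean;  // some drivers find seat vibration uncomfortable
  preferredVoiceTone: "calm" | "direct"; // personalized emotional tone
};

type Modality = "visual" | "haptic" | "audio";

// Filter candidate modalities down to ones this driver can perceive and accepts.
function allowedModalities(profile: DriverProfile): Modality[] {
  const modalities: Modality[] = ["visual"];
  if (profile.hapticsEnabled) modalities.push("haptic");
  if (!profile.hearingImpaired) modalities.push("audio");
  return modalities;
}
```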
Conceptual 3-stage escalation model integrating with Apple CarPlay/Android Auto Interface
System detects: Lane departure | Forward collision risk | Driver inattention | Speed variance
Sensed via: Camera Analysis | Radar Sensors | Driver Monitoring
Stage 1: Immediate, subtle visual feedback through windshield projection.
2-3 seconds | No driver response
Stage 2: Multi-modal reinforcement through a CarPlay visual cue, directional seat haptics, and spatial audio.
Stage 3: Physical guidance through steering-wheel torque and pedal resistance, with emergency braking as the final fallback.
This escalation method is integral to how we ensure the driver stays alert and aware without shifting their focus away from the main priority: the road.
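A rough sketch of that escalation as a timer-driven state machine is shown below; the stage timings mirror the 0-2s / 2-5s / 5s+ windows described in the demo walkthrough later in this case study, and every identifier is an illustrative assumption rather than implemented code.

```typescript
// Hypothetical three-stage escalation: each stage adds channels only if the
// driver has not responded within its time window.
type Stage = 1 | 2 | 3;

interface StageAction {
  channels: string[];
  escalateAfterSeconds: number | null; // null = terminal stage
}

const ESCALATION: Record<Stage, StageAction> = {
  1: { channels: ["windshield HUD glow"], escalateAfterSeconds: 2 },
  2: { channels: ["CarPlay arrow", "seat haptic", "spatial audio"], escalateAfterSeconds: 3 },
  3: { channels: ["steering torque", "pedal resistance", "emergency braking"], escalateAfterSeconds: null },
};

// Advance one stage at a time while the hazard persists and the driver stays unresponsive.
function nextStage(current: Stage, secondsWithoutResponse: number): Stage {
  const { escalateAfterSeconds } = ESCALATION[current];
  if (escalateAfterSeconds !== null && secondsWithoutResponse >= escalateAfterSeconds) {
    return (current + 1) as Stage;
  }
  return current;
}
```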
Phase III
As the UX designer, I pushed for three critical innovations:
Most drivers already trust their phone's interface. Why not leverage that existing mental model instead of adding another screen to learn?
Inspired by video game controllers, I proposed subtle resistance that physically communicates 'slow down' without removing driver control.
Your body knows left from right instinctively. Rather than forcing eyes off the road to check a screen, the seat tells you which way to correct.
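As a sketch of how those last two cues could be driven, the snippet below maps lane-drift direction to a seat-haptic side and scales gas-pedal resistance with overspeed; the direction convention (cue the side the driver should correct toward) and the function names are assumptions for illustration only.

```typescript
// Hypothetical mapping from lane drift to directional seat haptics.
// Convention assumed here: vibrate the side the driver should steer toward.
type Side = "left" | "right";

function seatHapticSide(driftDirection: Side): Side {
  return driftDirection === "left" ? "right" : "left"; // drifting left -> nudge right
}

// Pedal resistance rises gently with how far the driver exceeds a safe speed,
// communicating "slow down" without removing driver control.
function pedalResistance(speedKph: number, safeSpeedKph: number): number {
  const overspeed = Math.max(0, speedKph - safeSpeedKph);
  return Math.min(1, overspeed / 30); // 0 = no resistance, 1 = maximum firmness
}
```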
(Demo Layers)
Layer 1 (0-2s): Subtle blue glow on her windshield’s right edge. Her peripheral vision catches it (no focus shift needed).
Multi-modal alerts combining visual icons, text context, and directional audio. Select an alert scenario to see the CarPlay and HUD integration.
Lane departure: “Vehicle drifting left” (Apple CarPlay/Android Auto Display) | “Take corrective action” (Toyota HUD Display)
Forward collision: “Reduced distance ahead” (Apple CarPlay/Android Auto Display) | “Take corrective action” (Toyota HUD Display)
Blind spot: “Vehicle in right blind zone” (Apple CarPlay/Android Auto Display) | “Take corrective action” (Toyota HUD Display)
Driver inattention: “Eyes off road detected” (Apple CarPlay/Android Auto Display) | “Take corrective action” (Toyota HUD Display)
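The demo copy above can be captured as a simple lookup that a prototype reads for both screens; the scenario keys and the AlertCopy shape are naming assumptions, not part of the shipped design.

```typescript
// Hypothetical lookup pairing each detection scenario with its CarPlay/Android Auto
// context message and the shorter Toyota HUD instruction.
type AlertCopy = { carplay: string; hud: string };

const ALERT_COPY: Record<string, AlertCopy> = {
  laneDeparture:     { carplay: "Vehicle drifting left",       hud: "Take corrective action" },
  forwardCollision:  { carplay: "Reduced distance ahead",      hud: "Take corrective action" },
  blindSpot:         { carplay: "Vehicle in right blind zone", hud: "Take corrective action" },
  driverInattention: { carplay: "Eyes off road detected",      hud: "Take corrective action" },
};
```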
Layer 2 (2-5s): CarPlay shows a gentle arrow. Her seat vibrates on the right side. Spatial audio from the right speaker says ‘Lane drift.’
Layer 3 (5s+): Her steering wheel pulses with directional torque. Gas pedal firms up. If she still doesn’t respond? Emergency braking engages.
Click any button above to see how each safety system activates and provides feedback to the driver.
Phase IV
What We’d Validate Next
What Judges Taught Us
My Growth