AI, Machine learning and self-driving cars
Recently, while working with high-definition maps (HD Maps) and highly autonomous driving (HAD) concepts, I noticed new approaches emerging in our design, discovery, and prototyping activities.
Designing experiences for AI and robots will require completely new approaches, and we rarely find new talent from academia trained in them. For example, the idea that a vehicle can adapt to its driver the longer it is in use, while familiar from online search, is not yet visible in products such as vehicles.
Designers and engineers need to think about how to expose training modes. That is, machines (robots, cars, etc.) will need a high degree of self-definition and adaptation after they leave manufacturing. Driver behavioural data becomes training input for existing ML pipelines, which can shorten learning and help baseline alternative inputs. Today, companies such as Waymo, Uber, and Tesla claim “millions of autonomous miles” driven. In effect, the driver is labeling data in both cases: while acting as supervisor of an “autonomous” mode, and while actively driving.
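To make the driver-as-labeler idea concrete, here is a minimal sketch of how driver interventions during an autonomous stretch could be turned into weak labels for a training pipeline. All names, the frame schema, and the labeling rule are illustrative assumptions, not any vendor's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class DrivingFrame:
    timestamp: float
    mode: str                # "autonomous" or "manual" (assumed schema)
    driver_intervened: bool  # e.g. a steering override or brake tap

def label_frames(frames):
    """Turn raw driving frames into weak labels for an ML pipeline.

    A driver intervention during autonomous mode flags that moment as a
    negative example; uneventful autonomous stretches become positive
    examples. Manual-mode frames are skipped in this simple sketch.
    """
    labels = []
    for f in frames:
        if f.mode == "autonomous":
            label = "negative" if f.driver_intervened else "positive"
            labels.append((f.timestamp, label))
    return labels
```

In practice the negative label would likely be smeared backwards over the seconds preceding the intervention, since the model's mistake happened before the driver reacted.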
Drivers should be more exposed to inference cues emerging from the ML pipelines: the more the driver understands the signals coming out of these black boxes (including many obscure CNN, RNN, and RL layers and filters), the better for the user. These signals help the driver understand machine intent and why a given action is about to happen, with timely precision. The car is perhaps the first and primary environment in which humans and machines will cooperate and make decisions together; it sets the precedent for how we will learn to live with these automated products and services in the future.
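As a sketch of what exposing machine intent could look like, the function below maps a planned action and a model confidence score to a driver-facing cue. The confidence threshold and the message wording are illustrative assumptions; a real system would use calibrated probabilities and carefully tested phrasing:

```python
def intent_cue(action: str, confidence: float, seconds_ahead: float) -> str:
    """Translate a planner's intended action into a driver-facing cue.

    `confidence` is assumed to be a calibrated probability from the
    planning model; the 0.6 threshold is illustrative, not tuned.
    """
    if confidence < 0.6:
        # Low confidence: surface uncertainty instead of a firm promise.
        return f"Uncertain: may {action} — stay alert"
    return f"Will {action} in {seconds_ahead:.0f}s"
```

The design choice here is that uncertainty itself is a signal worth surfacing: a hedged cue keeps the driver in the loop precisely when the black box is least reliable.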
Creating maps from ground observations and labeling data streams from thousands of sources, most captured at different times, is a daunting task. Labeling workflows are complex, and the tools to support them are not readily available for many industry-specific use cases.
Additionally, designers need to work with dynamic data from these sources, including passive driving behaviour, an important input that is not readily available. Standard interaction design with preset logic flows no longer applies; instead, situational-awareness cues need to be gathered and presented when they are relevant.
These are some recent tasks my team worked on to address these problems:
- Developing a research agenda for situational-awareness concepts assisting the HAD modality, using computer vision and LIDAR input for mixed-mode rendering that combines sensor fusion, HD map content, and dynamic data from services such as traffic and IoT/smart cities, together with anonymized personal behavioural data feeds.
- Adapting and prototyping rendering engines (mostly on Nvidia embedded hardware) and real-time design demonstrator tools to serve use cases during highly autonomous modes.
- Spearheading new design practices, leveraging abundant 3D data to validate use cases and blending engineering and design team efforts to remove organizational barriers.
- Minimizing static data, which the human brain can fill in on its own, and elevating events worthy of attention: in effect, a semantic classifier running on the fly.
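The last point, suppressing static scene content while elevating attention-worthy events, can be sketched as a simple salience filter. The event schema, the set of "static" classes, and the threshold are all illustrative assumptions:

```python
def elevate_events(events, salience_threshold=0.7):
    """Filter a stream of scene events, keeping only those worth the
    driver's attention.

    Static background (which the human brain fills in anyway) is dropped
    outright; remaining events must also clear a salience threshold.
    Event kinds and the 0.7 threshold are illustrative, not from a real
    perception stack.
    """
    STATIC = {"lane_marking", "building", "parked_car"}
    return [e for e in events
            if e["kind"] not in STATIC and e["salience"] >= salience_threshold]
```

For example, a high-salience building is still suppressed while a high-salience pedestrian is elevated, which is the asymmetry the bullet above argues for.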
Automotive data feeds are worth sharing for the common good: the more data is shared, the better we can train semantic classifiers, and the less error-prone inference becomes. This can only increase safety. Today, these feeds are treasure troves kept in silos.
Use cases will emerge for situational-awareness design tools, moving beyond visual flow and lane-identification patterns to broader contexts that include the driver or passenger in the control loop. For example, drowsiness or hands-off situations could be managed with much longer anticipation and better ergonomic inclusion, using biometric sensors and improved haptic and aural feedback.
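The anticipation idea can be sketched as a rolling-window monitor over a biometric signal, so the system sees a trend before a hard threshold is crossed and can escalate feedback early. The signal choice (eyelid-closure ratio), window size, and threshold are illustrative assumptions, not clinically validated values:

```python
from collections import deque

class DrowsinessMonitor:
    """Rolling-window monitor over a biometric signal, e.g. an
    eyelid-closure ratio in [0, 1]. Window and threshold are
    illustrative, not clinically validated.
    """
    def __init__(self, window=5, threshold=0.5):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, closure_ratio: float) -> bool:
        """Record a reading; return True when the windowed average
        suggests drowsiness, giving the system time to escalate
        haptic/aural feedback before a hands-off emergency."""
        self.readings.append(closure_ratio)
        avg = sum(self.readings) / len(self.readings)
        return avg >= self.threshold
```

Averaging over a window trades a little latency for robustness: a single blink does not trigger an alert, but a sustained drift does.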