Episode 16 – Voice in the Car

The car is a prime context for voice experiences, but it also comes with a number of unique challenges. Guest Shyamala Prayaga shares key insights into designing for the car, including cognitive load requirements, regulatory concerns, and design decisions.

Shyamala Prayaga

Guest – Shyamala Prayaga

Shyamala Prayaga is a user experience evangelist with experience designing mobile, web, desktop, and voice-based interactions. Presently working at Ford Motor Company, she is helping shape the future of digital and physical spaces within Ford and Lincoln vehicles through voice and multimodal speech interfaces.

 

Links

Transcript

 

Jeremy Wilken 0:02
Welcome to Design for Voice. I'm your host, Jeremy Wilken. Today we're going to be taking a look at voice in the car: how voice experiences extend into the automobile, and what kinds of things are different and unique about that context compared to other personal systems and in-home voice experiences that we know about today. I'm joined today by Shyamala Prayaga. She's with Ford Motor Company and willing to share some of her insights and experience with voice in the car. Welcome to the show.

Shyamala Prayaga 0:32
Thank you, Jeremy. It's a pleasure being on the show.

Jeremy Wilken 0:37
Why don't we kick it off — give us a little bit of your background and how you came to be part of the voice industry.

Shyamala Prayaga 0:43
Absolutely. So yeah, as Jeremy mentioned, I'm Shyamala Prayaga, and I've been in the industry for quite a long time now. I started my career as a UX designer and worked at a lot of companies — a lot of startups, and big companies like Amazon. My voice experience started when I began working with Amazon; Alexa was still a secret project, and I supported some of those early efforts. That's how I learned about voice and voice experiences — how to design for voice, assistants and agents, and all those different kinds of things. After Amazon, I joined Voicebox, which was a conversational AI company — they had conversational AI platforms, and Voicebox has since been acquired by Nuance. I worked with that company for quite a while, designing in-car experiences for their clients, and I also worked on their in-home voice assistants. That's how my voice experience evolved. Now here at Ford, I'm doing a lot of voice experience and conversation design. My role is that of a product designer or product owner, and I define the in-vehicle use cases — what kinds of use cases we need in the car, what is required to enable those, and things like that.

Jeremy Wilken 2:05
Awesome. So many people have a tie to Nuance — it's always interesting to hear how people have been a part of the company or somehow tied to it. It's fascinating. So thank you again for joining the show. I wanted to start by just taking a look at the primary use cases for the car, to level set and understand the context that we're in. Because it's different from sitting at home, or even a workplace — the car is a unique context, which has some really interesting use cases.

Shyamala Prayaga 2:35
Absolutely, yeah, you said it right. There have been quite a few domains in the car, and the use cases have especially focused on calling someone or navigation. Those were the most important, primary use cases we have seen in the car. But the use cases that are more relevant to the car nowadays are things like: while on my drive, I have so many meetings, and I have to connect to them. Right now I connect using my Bluetooth and my phone, but voice can pretty much become the conduit to connect to meetings, or schedule an important meeting right on the fly when I remember to do certain things, or take notes while driving. Those kinds of use cases are pretty common. In addition, there are things like checking your vehicle information — how much tire pressure do I have, or how many miles can I go on my current fuel — or customer-care kinds of things, where I want to complain or get information about why I'm not able to connect to my Bluetooth. Those are more relevant and in context for the car. Checking emails, messaging someone, or reading messages from someone — those are also pretty relevant, because people are always tempted to look at the message when their phone beeps while driving, and that's not safe. So voice can be a modality where you can say, "What is my new message?" and something along those lines.

Jeremy Wilken
And in the car, the limitations are things like you have to look at the road; you're in a vehicle that's a small, contained space, so you have limited reach and access to things. What other things about the car experience itself lend it to different types of voice experiences?

Shyamala Prayaga
So there are a lot of additional limitations. One of the things you mentioned rightly was the eyes-on-the-road kind of thing. But even when I'm driving and talking a lot using the voice assistant, there can be cognitive load. Plus, for example, if I say, "Find me a nearby Starbucks," the definition of nearby could be 15 miles, so you may find 20 or 30 different Starbucks. You cannot say, "I found 15 Starbucks; the first one is this, the second one is that" — you can't just repeat that list. So you need a supplemental screen as well. But then that forces the user to look at the screen, so the interaction between the voice plus the screen also adds to the cognitive load, on top of the distractions of driving. We need to look at how we can reduce that. In addition, the car is just a modality, and it cannot work independently — it's dependent on a lot of service providers, and it's dependent on connectivity. For example, the use case I was talking about, checking emails or messages: it has to connect to your phone, iOS or Android, and there are limitations there. What happens is, if you connect your iOS device, and anybody messaged you before you connected your phone, you will not be able to read or retrieve those messages in the car. Only a new message that arrives after you connected to Bluetooth can be read to you. Those kinds of limitations exist because of the platform, and that adds to the challenges we face when designing for voice in the car.

Jeremy Wilken 6:26
Do a lot of the designers and developers trying to build things for the car have ways around any of those limitations? Are there ways they can design around them? For example, you mentioned what users can actually hear and remember in context — we have short-term memory, and my cognitive load might hit a certain limit. So what kinds of things can we do to design around those limitations? We may not be able to get around the technical limitation of my messages being limited to the most recent ones, but I might be able to do something about the design around how much information I provide to a user.

Shyamala Prayaga 7:05
Yeah, absolutely. I would say machine learning and analytics could be key in designing the experience. From a design standpoint, yes, you have to break it down for the user: "I found 15 Starbucks, but here are the first two or three," because people would not want to listen to the entire 15. But there could be some level of analytics data that could be used — I prefer this kind of Starbucks, or this location, or something like that — so we can filter the results and then show the user the more relevant ones. That's one option. On the screen, we also have limitations, because we can show only up to a certain limit — we cannot show all 15, first because of the screen size, and second because of driver distraction, so we limit those as well. Based on the research I have done, I've seen that people don't paginate past the first set of results. Whatever you present to them in the first result is what they use — part of it could be "I'm driving and I don't want to go through pagination," or it could be that they don't know more results are available. So the way I design the dialogue and the templates, together with some level of AI and machine learning to filter and smartly present results, can solve for this problem.
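The filtering step Shyamala describes — ranking a long result set by the driver's own history and keeping only a short list to speak and show — can be sketched roughly like this. This is a hypothetical illustration; the scoring weights and field names are invented, not Ford's actual system:

```python
from dataclasses import dataclass

@dataclass
class Poi:
    name: str
    miles_away: float   # distance from the driver right now
    past_visits: int    # from the driver's own usage analytics

def top_results(pois, limit=3):
    """Rank the full result set by a blend of proximity and the driver's
    history, then keep only a short list to speak and show on screen."""
    def score(p):
        # Places the driver actually visits rank higher; closer is better.
        return p.past_visits * 2.0 - p.miles_away
    return sorted(pois, key=score, reverse=True)[:limit]

# "Nearby" returned 15 Starbucks -- far too many to read aloud.
found = [Poi(f"Starbucks #{i}", miles_away=float(i), past_visits=0)
         for i in range(1, 15)]
found.append(Poi("Starbucks on Main St", miles_away=2.0, past_visits=12))

shortlist = top_results(found)  # the driver's usual spot ranks first
```

The dialogue can then say "here are the top three" instead of enumerating all 15, which is exactly the pagination problem she describes.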

Jeremy Wilken 8:26
Okay, so that means we've got to marry more than just good design patterns — we also have to think through how we can programmatically and dynamically, in the moment, filter based on the conditions in front of us. It might be how many options there are, how far away some of them are, what the traffic is like between point A and point B, and all of these options, and then decide on the most relevant option for the user to reduce that cognitive load. Right? Well, that makes us much more than just designers of conversation. We're also practitioners of designing the AI experience, or at least the algorithms behind the scenes — how they think through whatever limitations exist for the user, and make sure those things get surfaced through the results they create and generate, so the user doesn't have to do as much thinking themselves.

Shyamala Prayaga 9:18
Exactly. Because think about it — users are in the car, and their main task at that moment is to drive from point A to point B. All the other things they want to do are because they want to be more productive while driving. So if we add more cognitive load — there has been research that says voice recognition is just as distracting as mobile phones or touchscreens in the car, and the most important reason is the kind of cognitive load it adds to the user. Even if you look at older-generation cars — I have a Nissan, so every time I call someone, it gives me a list, but it never allows me to touch any of those; it asks me, "Please say your line number." So now I have to say "line number one" or "line number two," and then it makes the call. Sometimes it works accurately; sometimes, because in the past I have saved several numbers, when I say "line one" it just hears "one" and randomly dials the first number it has — not the one in the list. That adds to the frustration. I'm like, "No, no, this is not what I meant," and I try to cancel it, and sometimes I give up so much that I stop using it. By fixing these kinds of experiences and reducing the cognitive load, we are earning the user's trust, plus we are adding to the utility aspect, so people will be more keen to use it because the system is much smarter.

Jeremy Wilken 10:53
Right. And that's one of the challenges — my car system is similar, where it's very specific: you have to say the right word or the right number, and it guides you through it. But the only way you know what the numbers are is by looking at the screen, which to me seems a little bit at odds. While I'm driving, it does prevent me from touching the screen to do some things, like setting new navigation points — which is frustrating as well if I have a passenger who could do it. Then I have to do it through voice, go through the clunky interface, and it can take longer and be more frustrating. Ultimately, I'd rather just use a different app.

Shyamala Prayaga 11:33
Yeah, see, that's exactly what happens. You brought up a really good point there about passengers. Nowadays the cars are smart enough to detect the passenger and adjust the level of driver-distraction lockout. If there were a way to enable those things for voice as well — detect the passenger in the car and then allow them to navigate, interact, and use the touchscreen — the experience could be much more satisfying.

Jeremy Wilken 12:02
Yeah, that was the next thing I wanted to ask more about: the context of the car. It's a private space, but you might have other people in the car — they might be family, they might be coworkers, and if you're a rideshare driver, they could be strangers. When you start thinking about the sense of privacy and personality, how do you bring those things to the forefront in a place that's usually private, but maybe not? What are some considerations that you need to make there?

Shyamala Prayaga 12:33
See, privacy is a really important topic — people talk about it all the time. Definitely, some of the considerations are: if I ask for a message or dictation, or for certain things that are unique to me, like my voice notes, and there are other people in the car, I may not want to use that. Or if someone else is trying to — there are ways, like using voice biometrics, to detect that I'm the person making the request, and things like that could be used to solve for it. But the other thing I have seen, based on the research, is that when someone else is in the car, people are more likely not to use voice recognition, because they don't feel comfortable with the kinds of utterances they are using. We need to break that mindset. We need to design the experience in a way that, no matter who's with you, you are as comfortable talking to the assistant as you are talking to your friend or family.

Jeremy Wilken 13:32
That's interesting — I can see that, though. You're perhaps in a conversation with somebody else, and breaking that conversation to talk to the computer — to dictate something or change the navigation or whatever the case is — could seem awkward, versus talking to your friend about it, and maybe your friend pulls up the phone and makes those changes for you. So I can see it as a social cue: it kind of interrupts a conversation with a friend or a spouse or whoever to flip over to talking to the computer. Although it could make sense if we become more accustomed to it and it stops being a social faux pas.

Shyamala Prayaga 14:13
Right, yeah. For example, if you look at the assistant devices at home, no matter who's sitting in front of you, you are really comfortable talking to them — setting a timer or calling someone. My son will do all sorts of Easter eggs, where he'll ask, "Who are you? How are you? Do you like me?" and things like that. So it's just a matter of being comfortable in the context. At home we are really comfortable doing all those kinds of things; the cars are not equipped or designed, I would say, to do the same kind of thing. It's more of a utility: I want to call someone, I want to send a message, or I just want to get navigation information. The use cases are so limited. If we expand those limited use cases to an extent that is relevant for the car but at the same time entertaining, some of those social barriers could be avoided or reduced.

Jeremy Wilken 15:11
I've thought about those games I used to play as a kid. We went on a couple of long trips — you're driving for hours and hours, or days actually — and you play games like "find something that starts with the letter A," then B, and everyone's racing to find these items based on letters, and whoever's first gets the point. Or you try to find license plates from all 50 states, stuff like that. So car games and things like that — those are very simple, and probably more passenger games. But is there really a case for a lot of other entertainment? I guess there are DVD players in a lot of cars these days, and maybe you can control those, but could pure voice entertainment happen through the car system?

Shyamala Prayaga 16:02
Absolutely. You mentioned games, right? There are a lot more games that could be included in that context. For example, every time we go for a long drive, my son is like, "Okay, play with me," and he'll do I Spy and things like that. But at some point we want him to learn things, so I'm like, "Okay, let's do spellings." He's in third grade, so I'll ask him spellings — "What is the spelling of imagination?" — and he tries to spell it out for me. So the assistant could also help with these kinds of learning games when kids are in the car and want to play. Because as a driver, I will not want to play games while I'm driving, but I want my kid to be engaged so that he's not disturbing me as much — "play a game with me, please play, please play" — while I'm driving and not very comfortable on the road and getting intimidated. So that could be a good use case: the assistant doing some sort of learning games and keeping them engaged.

Jeremy Wilken 17:11
Certainly. I mean, I have young kids right now, and it was funny — the other day, my daughter just started yelling louder at something, like she thought that would make it get louder. So their expectations of being able to use voice in all kinds of places are interesting; I think they're going to just assume they can use voice commands to control stuff. This was, I think, actually a Hallmark card that had a little music thing inside it — she was yelling "Louder! Louder!" and it was not going to get any louder. It was really cute. But in the car, I've also heard the same thing — her yelling, "Hey Alexa, play" whatever music she likes — and I'm like, you can't do that, the context is different, she can't actually access those features and functionality. But if she could, on long road trips, that would give her a little bit of control, and it frees the driver. Parents are often more distracted drivers than anybody else, because kids can take a lot of attention — I totally get it. And especially if you're trying to remember how to spell "imagination" — I'm not a good speller to begin with, so putting that cognitive load on top of driving could actually be very stressful in some cases. So yeah, that's really interesting: being able to take the cognitive load down by using voice, not just for the driver but for the passengers.

Shyamala Prayaga 18:37
Yeah. And nowadays it is possible to have multiple mics. Of course, when you look at the space, there are challenges again, and there are ways to mitigate them. But it is now possible to have multiple mics, know where the sound is coming from, and respond just to that area, or things like that. So there are definitely ways to solve it. I think the car companies need to start thinking about and investing more in these kinds of things if they want a seamless experience in the car, before some other company comes and takes over that experience.

Jeremy Wilken 19:13
Yeah, car manufacturers are unique because they create the environment — they create the car, they create the whole space. Anybody else, like Google or Apple with the connected functionality of CarPlay or Android Auto, has to work within that context and environment. So manufacturers have the ability to create a first-rate experience and embed it however they want to get the full desired effect. But that requires strategic planning — not just how it should work with the design, but how you manufacture and design the whole car itself. And future-proofing, I suppose, is also a big question. So I can see that's a big challenge.

Shyamala Prayaga 19:55
Yeah, definitely, that it is. And now a lot of automakers — a lot of companies — are trying to get into the car, and there are additional challenges. It's not just the challenge of integration; there are challenges around who owns the data. Data has become the most important thing when it comes to any kind of consumer information people are looking for. So now there is also a fight for data, which we have to think about.

Jeremy Wilken 20:26
Let's talk a little bit more about the regulations. Cars — automobiles — it makes sense that they're highly regulated, for safety reasons and things of that nature. What do you have to deal with in order to ensure that you're in compliance? Does voice actually help with some of those issues, or can it cause additional problems?

Shyamala Prayaga 20:49
Yeah, there are a lot of regulations and safety approvals we have to get when we design something, and that changes region by region. We have NHTSA, the National Highway Traffic Safety Administration, which defines some of the rules around what can be in the car and what is considered too much cognitive load or distraction for the user. Interestingly, these driver-distraction rules apply in the US and EU, but if you go to China or India or other regions, they don't have similar regulations — you can have everything in the car there. When it comes to here, it's slightly different: you have to comply with those regulations and make sure everything is as required. For example, one of the things is list items — how many list items can you show at any given point? The driver-distraction rules allow up to eight glances or 21 steps; anything more than that is considered excessive cognitive load, so we are not allowed to show it. Those kinds of things need to be kept in mind when designing. If I had to design the same thing for other regions, I would be able to present more list items and do a lot more. Recently we had an experience with the visual cues in the cars — we have cues for listening, speaking, and processing — and animations, even small animations, in the car are considered distracting. Anything in the car — any moving object — has to add value. So we were asked to remove the speaking state: the assistant is already speaking back to you, so there's no point having any kind of speaking-state animation, because that adds to the cognitive load. We have to deal with these kinds of things. We fought back a lot, and we lost the battle, because safety was more important. They came back saying, "Nope, we cannot. Give me the rationale — why do we need a speaking state when the assistant is already speaking to you?" So we have to do these kinds of things as well.
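The glance-and-step budget described here could be enforced as a simple guard in the HMI layer. This is a rough sketch using only the numbers quoted in the conversation — real NHTSA guidance is more detailed, and the region list is an assumption drawn from the discussion:

```python
# Distraction budget as quoted in the conversation: up to 8 glances or 21 steps.
MAX_GLANCES = 8
MAX_STEPS = 21

# Regions where, per the discussion, driver-distraction rules apply.
REGULATED_REGIONS = {"US", "EU"}

def within_distraction_budget(glances: int, steps: int, region: str = "US") -> bool:
    """Return True if a proposed on-screen flow may be shown in this region."""
    if region not in REGULATED_REGIONS:
        # Other markets have no equivalent rule, per the guest.
        return True
    return glances <= MAX_GLANCES and steps <= MAX_STEPS
```

A design that exceeds the budget would then be trimmed (fewer list items, fewer steps) before shipping in a regulated market, while the same design could pass unchanged elsewhere.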

Jeremy Wilken 23:00
I'm wondering how somebody who's not part of the company or the regulation process can dive into that and learn more about those types of rules and patterns. Because it sounds like that's possibly out there somewhere. Is there a place to learn more about what those regulations are and how to apply them?

Shyamala Prayaga 23:20
Yeah — the NHTSA website, the National Highway Traffic Safety Administration website. If you go there, they have huge guidelines and standards around driver distraction and safety: what needs to be considered, how many things you can show, what needs to be shown, and what is considered distracting. So that is your first starting point to learn about some of these things, so you can avoid those kinds of issues when designing for the car.

Jeremy Wilken 23:49
Awesome. Okay, I'll make sure to find that link and post it in the show notes, and take a look myself, because that sounds like a really interesting educational moment. The last topic I wanted to dive into before we wrap up the show is actually going back to the display — we've talked about the display a couple of different times. Most cars today have something there, in different sizes and shapes, but there's a display in most types of cars these days. What is the right balance between having that visual display of information and voice? How does it become distracting? And — I guess we were talking about this with the regulation piece — how do you know when you've hit the right balance between voice and visual?

Shyamala Prayaga 24:40
So the experience has to be multimodal, and we need to balance it. Unless there is disambiguation needed — ambiguous recognitions or things like that — certain things can just be confirmed verbally. For example, if I say, "Call Jeremy," and you're the only contact in my list with that name, and it knows that, it can just say, "Okay, calling" — unless I have some setting turned on like "always confirm before you call someone." We don't have to show templates or screens in those scenarios, because you can just use voice. Similarly, nowadays every company is trying to add some sort of climate commands in the cars, like "turn on the AC" or "turn off the AC" — you don't have to show screens for those; a verbal confirmation is enough. Versus when there is disambiguation: say I ask it to call Mom, and — now that my son is in school — I have all the moms saved, all his classmates' moms, like "Jason's mom," "Jeremy's mom," or something like that. So when I say "call Mom," I would get the whole list of moms, because the assistant is not smart enough to know which mom I mean, and it should not just dial one accidentally, because that can add to the frustration. In those scenarios you need to show some sort of list — but also think about the maximum number, because if it found 50 different moms in the contact list, you don't have to show everything. You have to again use machine-learning-style algorithms to pick the right ones — these are the moms I contact often — and show those. That will allow the user to select the contact, but at the same time not information-overload them with someone they haven't called in, like, five years.
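The "call Mom" flow she walks through — confirm when there is a single match, otherwise show a short list ranked by whom the driver actually calls — might look something like this sketch. The data shapes are invented for illustration:

```python
def resolve_contact(query, contacts, recent_calls, max_shown=4):
    """contacts: saved contact names; recent_calls: name -> recent call count.
    Returns ('call', name) when the intent is unambiguous, otherwise
    ('list', shortlist) so the screen shows a small disambiguation list."""
    matches = [c for c in contacts if query.lower() in c.lower()]
    if len(matches) == 1:
        return ("call", matches[0])  # e.g. "Call Jeremy" -> just call
    # Many "moms": rank by who the driver actually calls, and cap the list
    # so the screen isn't flooded with contacts untouched for years.
    ranked = sorted(matches, key=lambda c: recent_calls.get(c, 0), reverse=True)
    return ("list", ranked[:max_shown])

contacts = ["Jeremy", "Jason's Mom", "Aiden's Mom", "Mom", "Noah's Mom"]
calls = {"Mom": 40, "Jason's Mom": 3}

action, target = resolve_contact("Jeremy", contacts, calls)  # unambiguous
mode, shortlist = resolve_contact("mom", contacts, calls)    # four matches
```

The single-match branch maps to the "Okay, calling" confirmation; the list branch maps to the capped, ranked disambiguation screen.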

Jeremy Wilken 26:37
Right. So it's finding the way to present that information when you need to break it down and see it more clearly, versus when we can make a pretty good guess that this is what you meant — so let's either just do a quick confirmation, and if we get it wrong, then we do the disambiguation stuff on the screen. What about the screen having touch capabilities? There are different types of screens — some of them you might not be able to touch, but some you might, and you can occasionally use that to select places to navigate to. It could be faster for me to quickly tap the navigation screen to see the overview than to try to ask and listen to it, because maybe I just want to know what the next road is that's coming up. So how does that get balanced in the mix as well? Because sometimes I want to be able to just quickly see my path — am I going the way I think, down this road versus that road, which I know is under construction — rather than hear it out loud.

Shyamala Prayaga 27:45
Yeah, so definitely — the way we have designed our experiences, they are multimodal, with touch and voice interactions. We cannot block the user and say, "This is the only modality you can use." We need to enable them — give them as many ways as possible. For example, when we present a list and you want further information about an item, you can just tap on it. Or you can say, "Call the second one," or "What is the route for the third one?" — it should allow you to do that. At the same time, I should be able to tap on the list item to get more information, if I'm uncomfortable doing it by voice or if there's a passenger who wants to control it. That is how you have to think about the experience: not only the multimodal interactions and voice, but also accessibility, which is always overlooked when it comes to in-car experiences. How can we add that aspect, so that anyone who has any kind of physical or situational disability can also use voice in the car with alternate modalities, including visual cues, audio cues, and things like that?

Jeremy Wilken 28:56
All right, we need to wrap up the show. And this is the endpoint detection of the show, where we recap a little bit and get to learn just a little bit more about you. So to start: what's the top takeaway that you think listeners should take from this conversation?

Shyamala Prayaga 29:10
I would say there are a lot more unique use cases in the car that we can focus on, which are different from the in-home kinds of scenarios — like I mentioned, calling someone, and roadside assistance could be one of those, and things like that. But the biggest thing is: when designing for in-car experiences, look at the guidelines and regulations, look at the limitations, and see how you can mitigate them using the AI and machine learning capabilities that are available now.

Jeremy Wilken 29:42
Yeah, cognitive load kept coming up in the conversation — how much do you put on the user, usually the driver? And how much can you craft the experience to simplify it, to provide really high-quality suggestions up front so there's no pagination required, and things of that nature. I think that's a really good takeaway, especially in the car. The next question for you is: what's an interesting voice experience that you've had or seen recently?

Shyamala Prayaga 30:12
So recently I read about McDonald's testing voice-activated drive-thrus in Chicago — you can order things through voice, and on the other side there is an agent taking your order and fulfilling it. It would be an interesting next step to have agent-to-agent interactions, where my car's agent is able to talk to the voice-activated drive-thru agent and fulfill the order, knowing these are the kinds of things I like. But I really found this interesting: voice is moving beyond the home and the car and going to other use cases, like drive-thrus, and that's really exciting.

Jeremy Wilken 30:55
Yeah, I think that's a sign of the times — things are going to be enhanced and augmented outside of the home. I'm really curious to see how those trials go long term. I think that's going to be a really cool experience, although it's also a tough one, because there's a lot that people say and do in a drive-thru, or in certain contexts, that really has to be nailed down. So the next question is: what kinds of resources do you recommend for anybody who would like to learn more about voice design?

Shyamlala Prayaga 31:29
yeah, I mean, we might have heard a lot of books, but I want to talk about the conversational designer course, which I recently took from conversation Academy, which focuses on you know, like, the copywriting techniques, what is the best way to design the voice assistant and think about the different, you know, standard vocabulary around how the board should introduce or disambiguate, or you know, how it should present lyst and things like that. So this course is excellent for someone who wants to die into conversation interaction design, and known about the different copywriting techniques.

Jeremy Wilken 32:05
Great. I think that makes sense not just for designers but also developers, because a lot of developers might be working on their own, or even if you're working with a set of designers, not everybody has that same background. So learning more about it from the development side, I think, would be a huge boost to the quality of our voice experiences. And lastly, how can people learn more about you and your work?

Shyamlala Prayaga 32:31
Um, so yeah, LinkedIn, I’m very active on LinkedIn. So you can just look up for me by my first name, and last name, Shamma Priyanka. And then I could show you know, a strong guy is my handle. So you can you know, I’m not very active on Twitter once in a while I’ll post, but then LinkedIn is my most, you know, common way to interact with people.

Jeremy Wilken 32:56
Excellent. Well, this has been a really great show. I really liked the level setting of the context of the car and what we can take away. I think a lot of it is true even outside of the car, but the car context ups the ante on the requirements: making sure you reduce cognitive load, thinking about privacy and who you're talking to, and being succinct and clear with your commands — not just from the user's perspective, but from a regulatory perspective, too. All right, that'll do it for this show. I just want to thank you again for coming and sharing all this information.

Shyamala Prayaga 33:33
Thank you, Jeremy. I really enjoyed talking to you.